CN115861275B - Cell counting method, cell counting device, terminal equipment and medium
- Publication number
- CN115861275B CN115861275B CN202211674351.1A CN202211674351A CN115861275B CN 115861275 B CN115861275 B CN 115861275B CN 202211674351 A CN202211674351 A CN 202211674351A CN 115861275 B CN115861275 B CN 115861275B
- Authority
- CN
- China
- Prior art keywords
- image
- matrix
- target cells
- image feature
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The application is applicable to the technical field of water quality detection and provides a cell counting method, a cell counting device, terminal equipment and a medium. The method comprises the steps of: dividing original image data into blocks to obtain a plurality of image blocks; adding sequence information to each image block and extracting the image features of the image blocks to obtain an image feature set; constructing a first image feature association matrix from the image feature set; constructing an image feature mask matrix according to the target frame position information of the target cells and the first image feature association matrix; processing the hyperedges corresponding to the image blocks that do not contain target cells by using the image feature mask matrix to obtain a second image feature association matrix; constructing an image feature hypergraph according to the second image feature association matrix; processing the image feature hypergraph with a hypergraph convolutional neural network to obtain the density of the target cells in the original image; and obtaining the number of target cells in the original image from that density. The application can improve the accuracy of cell counting.
Description
Technical Field
The application belongs to the technical field of water quality detection, and particularly relates to a cell counting method, a cell counting device, terminal equipment and a medium.
Background
Water quality detection provides data and experience for diagnosing treatment-process problems at a sewage treatment plant and helps professionals make correct judgments, so that a reasonable operation scheme for the plant can be designed and formulated, the effluent quality of the plant is ultimately improved, and environmental pollution is reduced.
In microscopic images, an untreated sewage mixed-liquor sample contains a large number of flocs and impurities that strongly interfere with the detection and counting of target cells, so the sewage mixture must be filtered and pretreated in a laboratory before the target cells can be observed. However, this series of operations is costly, which is why water quality detection needs a cell counting technique based directly on microscopic images: target frames are recognized in the sewage microscopic image (in the microscopic image, a target frame selects a partial sample of one microorganism and distinguishes that microorganism from the background or from other microorganisms according to the features inside the frame) so as to monitor and determine the type and concentration of the microorganisms, predict their actual quantity, and finally evaluate the quality of the water body from these data.
The mainstream cell counting methods for microscopic images are target counting methods designed for general images, which simply splice the features of the target cells with the picture features to obtain the features of the target-cell image. However, target cells in microscopic images are generally small, and the background contains many other cells of similar shape that strongly interfere with counting. As a result, features are easily extracted from blank background areas and from other kinds of microorganisms, leading to the recognition of unrelated objects and the erroneous counting of other microorganisms. The accuracy of current cell counting methods is therefore low.
Disclosure of Invention
The embodiments of the application provide a cell counting method, a cell counting device, terminal equipment and a medium, which can solve the problem of the low accuracy of existing cell counting methods.
In a first aspect, embodiments of the present application provide a cell counting method comprising:
performing blocking processing on original image data to obtain a plurality of image blocks;
adding sequence information to each image block in the plurality of image blocks, and extracting image features of the image blocks added with the sequence information to obtain an image feature set;
constructing a first image feature association matrix according to the image feature set;
constructing an image feature mask matrix according to target frame position information of target cells preset in the original image data and the first image feature association matrix;
processing the hyperedges corresponding to the image blocks that do not contain target cells in the first image feature association matrix by using the image feature mask matrix to obtain a second image feature association matrix;
constructing an image feature hypergraph according to the second image feature association matrix;
processing the image feature hypergraph by using a hypergraph convolutional neural network to obtain the density of target cells in the original image;
and obtaining the number of the target cells in the original image according to the density of the target cells in the original image.
Optionally, the original image data includes a plurality of original images.
Optionally, the block processing is performed on the original image data to obtain a plurality of image blocks, including:
performing blocking processing on a plurality of original images X according to the diameter h of the target cells; wherein each original image is divided into p image blocks, X = [x_1, x_2, ..., x_i, ..., x_m], x_i represents the i-th original image, m represents the total number of original images, p = (H × W)/(P × P), P ≥ h, P represents the side length of each image block, W represents the original image width, H represents the original image height, and i = 1, 2, ..., m.
Optionally, adding sequence information to each of the plurality of image blocks, and extracting image features of the image blocks to which the sequence information is added to obtain an image feature set, including:
by calculation formula
Obtaining all image blocks z after adding sequence information to the ith original image i The method comprises the steps of carrying out a first treatment on the surface of the Wherein,representing all image blocks corresponding to the i-th original image, a +_>x iq The q-th image block representing the i-th original image, the PE represents the sequence information and, q=1, 2, p.
by the calculation formulas

F'_iq^(m) = MSA(LN(F_iq^(m-1))) + F_iq^(m-1)
F_iq^(m) = MLP(LN(F'_iq^(m))) + F'_iq^(m)

obtaining the image features of each image block after adding sequence information to the i-th original image; wherein F_iq^(m) represents the image features of the q-th image block of the i-th original image, with sequence information added, in the m-th encoder block; MSA(·) represents multi-head self-attention, LN(·) represents layer normalization, MLP(·) represents a multi-layer perceptron, m represents the total number of encoder blocks, and F_iq^(m-1) represents the image features of the q-th image block of the i-th original image, with sequence information added, in the (m-1)-th encoder block;
sorting the image features of each image block with sequence information added, for each original image, according to the sequence information of the image blocks, to obtain an image feature set F; wherein F = {F_1, F_2, ..., F_i, ..., F_m}, F_i represents the image feature set of the i-th original image, F_i = {F_i1, F_i2, ..., F_ip}, F_i ∈ R^(p×d), R represents real space, and d represents the feature dimension of each image block.
Optionally, constructing a first image feature association matrix according to the image feature set includes:
taking the image feature corresponding to each image block in the image feature set F as a node, and clustering a plurality of nodes by using a K-Means clustering method to obtain a plurality of clusters;
taking each of the plurality of clusters as a hyperedge, and constructing an image block feature hypergraph according to the plurality of hyperedges;

and obtaining a first image feature association matrix H according to the image block feature hypergraph.
Optionally, constructing an image feature mask matrix according to target frame position information of target cells preset in the original image data and the first image feature association matrix, including:
according to the target frame position information of the target cells preset in each original image and the plurality of image blocks corresponding to that original image, performing block-wise calibration of the target frames of the target cells in the original image to obtain a target frame position vector L_i of the target cells in the original image, L_i representing the target frame position vector of the target cells in the i-th original image; wherein L_i ∈ R^(1×p);

multiplying the target frame position vector L_i of the target cells in each original image with the first image feature association matrix H to obtain an image feature mask matrix M; wherein M ∈ R^(p×e), and e represents the number of hyperedges.
Optionally, processing the superside corresponding to the image block without the target cell in the first image feature correlation matrix by using the image feature mask matrix to obtain a second image feature correlation matrix, including:
performing dot multiplication on the image feature masking matrix M and the first image feature association matrix H to obtain a dot multiplication result matrix;
removing the superside corresponding to the image block which does not contain the target cells in the dot-product result matrix, and increasing the weight of the superside corresponding to the image block which contains the target cells in the dot-product result matrix;
and calculating the correlation matrix of the supersides corresponding to all the image blocks containing the target cells according to the weights of the supersides corresponding to the image blocks containing the target cells, so as to obtain a second image characteristic correlation matrix A.
Optionally, processing the image feature hypergraph by using a hypergraph convolutional neural network to obtain the density of the target cells in the original image, including:
by the calculation formula

X^(l+1) = σ(D_v^(-1/2) A W D_e^(-1) A^T D_v^(-1/2) X^(l) θ^(l))

obtaining a density prediction matrix of the target cells; wherein X^(l) represents the image feature hypergraph matrix input to the l-th layer of the hypergraph convolutional neural network, X^(l+1) represents the image feature hypergraph matrix input to the (l+1)-th layer, θ^(l) represents the weights of the l-th layer of the hypergraph convolutional neural network, W represents the weight matrix of the hyperedges corresponding to image blocks that contain target cells and whose weights have been strengthened, D_v represents the degree matrix of the nodes, D_e represents the degree matrix of the hyperedges, σ represents an activation function, and A represents the second image feature association matrix;
and obtaining the density of the target cells in the original image according to the density prediction matrix of the target cells.
In a second aspect, embodiments of the present application provide a cell counting device comprising:
the block module is used for carrying out block processing on the original image data to obtain a plurality of image blocks;
the extraction module is used for adding sequence information to each image block in the plurality of image blocks, and extracting the image characteristics of the image blocks added with the sequence information to obtain an image characteristic set;
the first construction module is used for constructing a first image feature association matrix according to the image feature set;
the second construction module is used for constructing an image feature mask matrix according to target frame position information of target cells preset in the original image data and the first image feature association matrix;
the first processing module is used for processing the hyperedges corresponding to the image blocks that do not contain target cells in the first image feature association matrix by using the image feature mask matrix to obtain a second image feature association matrix;

the hypergraph module is used for constructing an image feature hypergraph according to the second image feature association matrix;

the second processing module is used for processing the image feature hypergraph by using the hypergraph convolutional neural network to obtain the density of target cells in the original image;
and the counting module is used for acquiring the number of the target cells in the original image according to the density of the target cells in the original image.
In a third aspect, embodiments of the present application provide a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the cell counting method described above when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which when executed by a processor implements the cell counting method described above.
The scheme of the application has the following beneficial effects:
in some embodiments of the present application, by adding sequence information to each of the plurality of image blocks, the position information of each image block within its original image can be obtained, so that the extracted image features are more accurate and the accuracy of cell counting is improved; by using the image feature mask matrix to process the hyperedges corresponding to image blocks that do not contain target cells in the first image feature association matrix, feature flow among the image blocks that contain target cells can be promoted and the correlation among the target cells enhanced, which further improves the accuracy of cell counting.
Other advantages of the present application will be described in detail in the detailed description section that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed for the embodiments or for the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from them without inventive effort by a person skilled in the art.
FIG. 1 is a flow chart of a method for cell counting according to one embodiment of the present application;
FIG. 2 is a schematic diagram of a cell counting apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
To address the low accuracy of current cell counting methods, the present application provides a cell counting method. By adding sequence information to each of the plurality of image blocks, the position information of each image block within its original image can be obtained, so that the extracted image features are more accurate and the accuracy of cell counting is improved; by using the image feature mask matrix to process the hyperedges corresponding to image blocks that do not contain target cells in the first image feature association matrix, feature flow among the image blocks containing target cells can be promoted and the correlation among the target cells enhanced, which further improves the accuracy of cell counting.
As shown in fig. 1, the cell counting method provided in the present application includes the following steps:
and step 11, carrying out blocking processing on the original image data to obtain a plurality of image blocks.
In some embodiments of the present application, the original image data may be a plurality of original images obtained by upsampling (a technique that resamples a low-resolution image to a higher resolution); illustratively, the original image data may be a plurality of microscopic images.
Blocking the original image data makes it convenient to subsequently extract its image features, distinguish the target cells from other cells, and better handle image areas that contain no cells, thereby reducing the interference of these factors (other cells and cell-free image areas) on the counting of target cells.
The specific steps of step 11 are as follows: performing blocking processing on a plurality of original images X according to the diameter h of the target cells; wherein each original image is divided into p image blocks, X = [x_1, x_2, ..., x_i, ..., x_m], x_i represents the i-th original image, m represents the total number of original images, p = (H × W)/(P × P), P ≥ h, P represents the side length of each image block, W represents the original image width, H represents the original image height, and i = 1, 2, ..., m.
It is worth mentioning that segmenting the original image according to the diameter of the target cells ensures, as far as possible, that the extracted image features are cell features rather than background features, which reduces the error of the cell count. The extraction of image features here is a feature extraction technique: a method and process by which a computer, through image analysis and transformation, derives informative and non-redundant values (features) of the microorganisms in a microscopic image from the initial measurement data, so as to improve subsequent learning and generalization and, in some cases, interpretability.
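By way of illustration only, the following is a minimal NumPy sketch of this blocking step. It assumes square blocks whose side length P equals the cell diameter h and simply crops any remainder at the image borders; the boundary handling is not specified above, so that choice is an assumption.

```python
import numpy as np

def block_image(image: np.ndarray, h: int) -> np.ndarray:
    """Split an image into p = (H*W)/(P*P) square blocks with P >= h.

    Assumption: P is taken equal to the cell diameter h, and any
    remainder at the borders is cropped (boundary handling is not
    specified in the source).
    """
    P = h
    H, W = image.shape[:2]
    Hc, Wc = H - H % P, W - W % P          # crop so P divides both sides
    image = image[:Hc, :Wc]
    blocks = image.reshape(Hc // P, P, Wc // P, P, *image.shape[2:])
    blocks = blocks.swapaxes(1, 2).reshape(-1, P, P, *image.shape[2:])
    return blocks                           # shape: (p, P, P[, channels])

# e.g. a 512x512 grayscale microscopic image with h = 32 yields 256 blocks
```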
And step 12, adding sequence information to each image block in the plurality of image blocks, and extracting the image characteristics of the image blocks added with the sequence information to obtain an image characteristic set.
It is worth mentioning that adding sequence information to each image block allows the image features to carry the position information of the corresponding original image, which benefits the accuracy of cell counting. The sequence information has a uniform size for all image blocks.
And 13, constructing a first image feature association matrix according to the image feature set.
And 14, constructing an image feature mask matrix according to the target frame position information of the target cells preset in the original image data and the first image feature association matrix.
The image feature mask matrix can be used to query, in the image feature association matrix, the hyperedges corresponding to image blocks that do not contain target cells and to remove those hyperedges, thereby eliminating the interference of blank background and other cells on the target cell count.
And 15, processing the superside corresponding to the image block without the target cells in the first image feature correlation matrix by using the image feature mask matrix to obtain a second image feature correlation matrix.
And step 16, constructing an image feature hypergraph according to the second image feature association matrix.
A hypergraph is a set system over a finite set and one of the most common structures in discrete mathematics. It is defined as follows: a hypergraph H is an ordered pair H = (X, E), where X is a non-empty set whose elements are called nodes or vertices (the vertex set), and E is a set of non-empty subsets of X, whose elements are called edges or hyperedges.
And step 17, processing the image characteristic hypergraph by using the hypergraph convolutional neural network to obtain the density of target cells in the original image.
In the field of computer vision, convolution extracts image features or matches information. For example, when a network is trained to distinguish microscopic images, its convolution kernels are trained as well; as a result of training, the kernels become sensitive to the different characteristics of the microorganisms and output different responses, thereby achieving image recognition.
And step 18, acquiring the number of target cells in the original image according to the density of the target cells in the original image.
In some scenarios, currently common techniques or devices may be used to determine the cell number from the cell density, completing the count of target cells in the original image.
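As a sketch of the convention most commonly used in density-based counting, assumed here since the step is deferred above to common techniques, the count is obtained by integrating, i.e. summing, the predicted density map:

```python
import numpy as np

def count_from_density(density_map: np.ndarray) -> int:
    # Density-map counting convention (assumed): the predicted number
    # of target cells is the integral (sum) of the density map.
    return int(round(float(density_map.sum())))
```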
As can be seen from the above steps, the cell counting method provided by the present application obtains the position information of each image block within its original image by adding sequence information to each of the plurality of image blocks, so that the extracted image features are more accurate and the accuracy of cell counting is improved; by using the image feature mask matrix to process the hyperedges corresponding to image blocks that do not contain target cells in the first image feature association matrix, feature flow among the image blocks containing target cells can be promoted and the correlation among the target cells enhanced, which further improves the accuracy of cell counting.
The following describes an exemplary procedure of step 12 (adding sequence information to each of the plurality of image blocks, and extracting image features of the image block to which the sequence information is added, to obtain an image feature set).
Step 12.1, by the calculation formula

z_i = [x_i1 + PE, x_i2 + PE, ..., x_ip + PE]

obtaining all image blocks z_i after adding sequence information to the i-th original image; wherein z_i represents all image blocks corresponding to the i-th original image, x_iq represents the q-th image block of the i-th original image, PE represents the sequence information, and q = 1, 2, ..., p.
Step 12.2, by the calculation formulas

F'_iq^(m) = MSA(LN(F_iq^(m-1))) + F_iq^(m-1)
F_iq^(m) = MLP(LN(F'_iq^(m))) + F'_iq^(m)

obtaining the image features of each image block after adding sequence information to the i-th original image; wherein F_iq^(m) represents the image features of the q-th image block of the i-th original image, with sequence information added, in the m-th encoder block; MSA(·) represents multi-head self-attention, LN(·) represents layer normalization, MLP(·) represents a multi-layer perceptron, m represents the total number of encoder blocks, and F_iq^(m-1) represents the image features of the q-th image block of the i-th original image, with sequence information added, in the (m-1)-th encoder block.
And step 12.3, sorting the image features of each image block with sequence information added, for each original image, according to the sequence information of the image blocks, to obtain an image feature set F.

Wherein F = {F_1, F_2, ..., F_i, ..., F_m}, F_i represents the image feature set of the i-th original image, F_i = {F_i1, F_i2, ..., F_ip}, F_i ∈ R^(p×d), R represents real space, and d represents the feature dimension of each image block.
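A hedged PyTorch sketch of steps 12.1 to 12.3 follows. It assumes a learnable position embedding for the sequence information PE and standard pre-LN transformer encoder blocks for the MSA/LN/MLP formulas; the linear projection of raw pixel blocks to d dimensions and all hyperparameters are illustrative assumptions, not taken from the source.

```python
import torch
import torch.nn as nn

class BlockFeatureExtractor(nn.Module):
    """Sketch of steps 12.1-12.3: add sequence information PE to the p
    image blocks of one image, then encode them with pre-LN transformer
    blocks (MSA and MLP, each preceded by LayerNorm, with residuals)."""

    def __init__(self, p: int, block_pixels: int, d: int,
                 num_blocks: int = 4, heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(block_pixels, d)                # flatten-and-project (assumed)
        self.pos_embed = nn.Parameter(torch.zeros(1, p, d))   # PE, one per block
        layer = nn.TransformerEncoderLayer(
            d_model=d, nhead=heads, dim_feedforward=4 * d,
            batch_first=True, norm_first=True)                # pre-LN variant
        self.encoder = nn.TransformerEncoder(layer, num_blocks)

    def forward(self, blocks: torch.Tensor) -> torch.Tensor:
        # blocks: (batch, p, block_pixels) flattened image blocks x_iq
        z = self.proj(blocks) + self.pos_embed                # z_i = [x_iq + PE]
        return self.encoder(z)                                # F_i in R^(p x d)
```

For one original image, row q of the output is the feature F_iq of the q-th block; the rows are already in block order, which matches the sorting in step 12.3.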
The specific procedure of step 13 (constructing the first image feature correlation matrix from the image feature set) is exemplarily described below.
In some embodiments of the present application, before step 13 is performed, the image feature set obtained in step 12 may be convolved one-dimensionally to aggregate the image features within each image block and enhance the local expressive power of the image features.
And 13.1, taking the image feature corresponding to each image block in the image feature set F as a node, and clustering a plurality of nodes by using a K-Means clustering method to obtain a plurality of clusters.
The K-Means clustering method, in full the K-Means clustering algorithm, is an iterative cluster analysis algorithm. The data are to be divided into K groups: K objects are randomly selected as initial cluster centers, the distance between each object and each cluster center is calculated, and each object is assigned to its closest cluster center. A cluster center together with the objects assigned to it represents a cluster. Each time objects are assigned, the cluster center of each cluster is recalculated from the objects currently in it. This process repeats until a termination condition is met, for example: no (or a minimum number of) objects are reassigned to a different cluster, no (or a minimum number of) cluster centers change, or the sum of squared errors reaches a local minimum.
And 13.2, taking each cluster in the plurality of clusters as a superside, and constructing an image block characteristic supergraph according to the plurality of supersides.
And 13.3, obtaining a first image feature association matrix H according to the image block feature hypergraph.
The following describes an exemplary procedure of step 14 (constructing an image feature mask matrix based on the target frame position information of the target cells and the first image feature correlation matrix set in advance in the original image data).
Step 14.1, according to the target frame position information of the target cells preset in each original image and the plurality of image blocks corresponding to that original image, performing block-wise calibration of the target frames of the target cells in the original image to obtain a target frame position vector L_i of the target cells in the original image, L_i representing the target frame position vector of the target cells in the i-th original image.

Wherein L_i ∈ R^(1×p).
It should be noted that the i-th image, divided into p blocks, corresponds to a one-dimensional vector [0, 1, ..., 1, 0]: if the j-th image block contains a target frame, the j-th entry of the vector is 1, and otherwise it is 0.
Step 14.2, multiplying the target frame position vector L_i of the target cells in each original image with the first image feature association matrix H to obtain the image feature mask matrix M; wherein M ∈ R^(p×e), and e represents the number of hyperedges.
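Since the exact form of this product is not reproduced above, the sketch below assumes the reading that keeps M in R^(p×e): the entries of L_i are broadcast over the rows of H, zeroing the rows of blocks that hold no target frame.

```python
import numpy as np

def mask_matrix(L_i: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Step 14.2 (assumed form): L_i has one entry per block, 1 where
    the block contains a target frame; H is the p x e association
    matrix. Broadcasting L_i over the columns of H gives M in R^(p x e),
    zeroing the rows of blocks without target cells."""
    return L_i.reshape(-1, 1) * H
```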
The following describes an exemplary procedure of step 15 (processing the hyperedges corresponding to the image blocks that do not contain target cells in the first image feature association matrix by using the image feature mask matrix, to obtain the second image feature association matrix).

Step 15.1, performing dot multiplication of the image feature mask matrix M and the first image feature association matrix H to obtain a dot-product result matrix.

Step 15.2, removing the hyperedges corresponding to image blocks that do not contain target cells from the dot-product result matrix, and increasing the weights of the hyperedges corresponding to image blocks that contain target cells.

Step 15.3, calculating the association matrix of the hyperedges corresponding to all image blocks containing target cells according to those weights, to obtain the second image feature association matrix A.
The following describes an exemplary procedure of step 17 (processing the image feature hypergraph using the hypergraph convolutional neural network to obtain the density of the target cells in the original image).
Step 17.1, by the calculation formula

X^(l+1) = σ(D_v^(-1/2) A W D_e^(-1) A^T D_v^(-1/2) X^(l) θ^(l))

obtaining a density prediction matrix of the target cells; wherein X^(l) represents the image feature hypergraph matrix input to the l-th layer of the hypergraph convolutional neural network, X^(l+1) represents the image feature hypergraph matrix input to the (l+1)-th layer, θ^(l) represents the weights of the l-th layer of the hypergraph convolutional neural network, W represents the weight matrix of the hyperedges corresponding to image blocks that contain target cells and whose weights have been strengthened, D_v represents the degree matrix of the nodes, D_e represents the degree matrix of the hyperedges, σ represents an activation function, and A represents the second image feature association matrix.
And step 17.2, obtaining the density of the target cells in the original image according to the density prediction matrix of the target cells.
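The layer above matches the standard hypergraph convolution used in HGNN-style networks; a NumPy sketch of one forward pass is given below, with the caveat that the formula image is not reproduced in this text, so the form shown is reconstructed from the symbol definitions rather than copied from the source.

```python
import numpy as np

def hgnn_layer(X, A, w_e, theta, sigma=lambda t: np.maximum(t, 0.0)):
    """One hypergraph convolution layer:
    X^(l+1) = sigma(Dv^-1/2 A W De^-1 A^T Dv^-1/2 X^(l) theta^(l)).
    X: (p, d) node features; A: (p, e) association matrix;
    w_e: (e,) hyperedge weights (boosted for cell-bearing blocks);
    theta: (d, d') layer weights; sigma: activation (ReLU assumed)."""
    W = np.diag(w_e)
    Dv = np.diag(1.0 / np.sqrt(A @ w_e))   # node degrees: sum_e w(e) A[v, e]
    De = np.diag(1.0 / A.sum(axis=0))      # hyperedge degrees: sum_v A[v, e]
    return sigma(Dv @ A @ W @ De @ A.T @ Dv @ X @ theta)
```

Empty hyperedges must already have been removed in step 15.2; otherwise the degree inverses above would divide by zero.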
The cell counting device provided in the present application is exemplified below in connection with specific embodiments.
As shown in fig. 2, embodiments of the present application provide a cell counting device 200 comprising:
the blocking module 201 is configured to perform blocking processing on the original image data to obtain a plurality of image blocks.
The extracting module 202 is configured to add sequence information to each of the plurality of image blocks, and extract image features of the image blocks to which the sequence information is added, to obtain an image feature set.
A first construction module 203, configured to construct a first image feature association matrix according to the image feature set.
The second construction module 204 is configured to construct an image feature mask matrix according to the target frame position information of the target cell and the first image feature association matrix, which are preset in the original image data.
The first processing module 205 is configured to process, using the image feature mask matrix, the hyperedges corresponding to the image blocks that do not contain target cells in the first image feature association matrix, to obtain a second image feature association matrix.
And a hypergraph module 206, configured to construct an image feature hypergraph according to the second image feature association matrix.
The second processing module 207 is configured to process the image feature hypergraph by using the hypergraph convolutional neural network to obtain the density of the target cells in the original image.
The counting module 208 is configured to obtain the number of target cells in the original image according to the density of the target cells in the original image.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
As shown in fig. 3, an embodiment of the present application provides a terminal device, a terminal device D10 of which includes: at least one processor D100 (only one processor is shown in fig. 3), a memory D101 and a computer program D102 stored in the memory D101 and executable on the at least one processor D100, the processor D100 implementing the steps in any of the various method embodiments described above when executing the computer program D102.
Specifically, when the processor D100 executes the computer program D102, the original image data is divided into blocks to obtain a plurality of image blocks; sequence information is added to each image block, and the image features of the image blocks with sequence information added are extracted to obtain an image feature set; a first image feature association matrix is constructed from the image feature set; an image feature mask matrix is constructed from the preset target frame position information of the target cells and the first image feature association matrix; the hyperedges corresponding to image blocks that do not contain target cells in the first image feature association matrix are processed to obtain a second image feature association matrix; an image feature hypergraph is constructed from the second image feature association matrix; the image feature hypergraph is processed with a hypergraph convolutional neural network to obtain the density of the target cells in the original image; and finally the number of target cells in the original image is obtained from that density. Adding sequence information to each of the plurality of image blocks yields the position information of each image block within its original image, so that the extracted image features are more accurate and the accuracy of cell counting is improved; using the image feature mask matrix to process the hyperedges corresponding to image blocks without target cells in the first image feature association matrix promotes feature flow among the image blocks containing target cells, enhancing the correlation among the target cells and improving the accuracy of cell counting.
The processor D100 may be a central processing unit (CPU); it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory D101 may in some embodiments be an internal storage unit of the terminal device D10, for example a hard disk or a memory of the terminal device D10. The memory D101 may also be an external storage device of the terminal device D10 in other embodiments, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device D10. Further, the memory D101 may also include both an internal storage unit and an external storage device of the terminal device D10. The memory D101 is used for storing an operating system, an application program, a boot loader (BootLoader), data, other programs, etc., such as program codes of the computer program. The memory D101 may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
The present embodiments provide a computer program product which, when run on a terminal device, causes the terminal device to perform steps that enable the respective method embodiments described above to be implemented.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application implements all or part of the flow of the methods of the above embodiments by means of a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a cell counting device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, the descriptions each have their own emphasis; for parts not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
The cell counting method provided by the application has the following advantages:
1. The image is divided into blocks according to the known cell diameter, and position information is added to each image block before features are extracted, so that the obtained features contain the position information of the corresponding original image and are therefore more accurate and effective.
2. In some embodiments of the present application, the features extracted from each image block are convolved one-dimensionally to aggregate the features within the block, which enhances the local expression of the features of the cell-containing portions of the microscopic image and better distinguishes them from the background.
3. The image blocks are clustered into clusters with the K-Means algorithm, and a hypergraph is constructed with the clusters as hyperedges and the image blocks as nodes. The hyperedges that contain no cells are queried and removed using the mask matrix formed from the target frame position information, and the weights of the hyperedges that contain cells are increased. This promotes feature flow between the image blocks containing cells, thereby improving the effectiveness and accuracy of cell counting.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
While the foregoing is directed to the preferred embodiments of the present application, it should be noted that modifications and adaptations may occur to those skilled in the art, and such modifications and adaptations are intended to fall within the scope of the present application, provided they do not depart from the principles set forth herein.
Claims (8)
1. A method of cell counting comprising:
the method comprises the steps of performing blocking processing on original image data to obtain a plurality of image blocks;
adding sequence information to each image block in the plurality of image blocks, and extracting image features of the image blocks added with the sequence information to obtain an image feature set;
constructing a first image feature association matrix according to the image feature set; wherein constructing the first image feature association matrix according to the image feature set comprises: taking the image feature corresponding to each image block in the image feature set F as a node, and clustering the plurality of nodes by using a K-Means clustering method to obtain a plurality of clusters; taking each of the plurality of clusters as a hyperedge, and constructing an image block feature hypergraph according to the plurality of hyperedges; and obtaining a first image feature association matrix H according to the image block feature hypergraph;
constructing an image feature mask matrix according to target frame position information of target cells preset in the original image data and the first image feature association matrix;
processing the hyperedges corresponding to the image blocks that do not contain target cells in the first image feature association matrix by using the image feature mask matrix to obtain a second image feature association matrix;
constructing an image feature hypergraph according to the second image feature association matrix;
processing the image feature hypergraph by using a hypergraph convolutional neural network to obtain the density of target cells in the original image; wherein processing the image feature hypergraph by using the hypergraph convolutional neural network to obtain the density of target cells in the original image comprises: by the calculation formula

X^(l+1) = σ(D_v^(-1/2) A W D_e^(-1) A^T D_v^(-1/2) X^(l) θ^(l))

obtaining a density prediction matrix of the target cells, wherein X^(l) represents the image feature hypergraph matrix input to the l-th layer of the hypergraph convolutional neural network, X^(l+1) represents the image feature hypergraph matrix input to the (l+1)-th layer, θ^(l) represents the weights of the l-th layer of the hypergraph convolutional neural network, W represents the weight matrix of the hyperedges corresponding to image blocks that contain target cells and whose weights have been strengthened, D_v represents the degree matrix of the nodes, D_e represents the degree matrix of the hyperedges, σ represents an activation function, and A represents the second image feature association matrix; and acquiring the density of the target cells in the original image according to the density prediction matrix of the target cells;
and acquiring the number of target cells in the original image according to the density of the target cells in the original image.
2. The cell counting method of claim 1, wherein the raw image data comprises a plurality of raw images;
the blocking processing performed on the original images to obtain a plurality of image blocks includes:

performing blocking processing on a plurality of original images X according to the diameter h of the target cells; wherein each original image is divided into p image blocks, X = [x_1, x_2, ..., x_i, ..., x_m], x_i represents the i-th original image, m represents the total number of original images, p = (H × W)/(P × P), P ≥ h, P represents the side length of each image block, W represents the original image width, H represents the original image height, and i = 1, 2, ..., m.
3. The cell counting method according to claim 2, wherein adding sequence information to each of the plurality of image blocks and extracting image features of the image blocks to which the sequence information is added to obtain an image feature set includes:
by the calculation formula

z_i = [x_i1 + PE, x_i2 + PE, ..., x_ip + PE]

obtaining all image blocks z_i after adding sequence information to the i-th original image; wherein z_i represents all image blocks corresponding to the i-th original image, x_iq represents the q-th image block of the i-th original image, PE represents the sequence information, and q = 1, 2, ..., p;
by the calculation formulas

F'_iq^(m) = MSA(LN(F_iq^(m-1))) + F_iq^(m-1)
F_iq^(m) = MLP(LN(F'_iq^(m))) + F'_iq^(m)

obtaining the image features of each image block after adding sequence information to the i-th original image; wherein F_iq^(m) represents the image features of the q-th image block of the i-th original image, with sequence information added, in the m-th encoder block; MSA(·) represents multi-head self-attention, LN(·) represents layer normalization, MLP(·) represents a multi-layer perceptron, m represents the total number of encoder blocks, and F_iq^(m-1) represents the image features of the q-th image block of the i-th original image, with sequence information added, in the (m-1)-th encoder block;
sorting the image features of each image block with sequence information added, for each original image, according to the sequence information of the image blocks, to obtain the image feature set F; wherein F = {F_1, F_2, ..., F_i, ..., F_m}, F_i represents the image feature set of the i-th original image, F_i = {F_i1, F_i2, ..., F_ip}, F_i ∈ R^(p×d), R represents real space, and d represents the feature dimension of each image block.
4. The method of claim 1, wherein constructing an image feature mask matrix from the target frame position information of the target cells in the original image and the first image feature correlation matrix comprises:
according to the target frame position information of the target cells preset in each original image and the plurality of image blocks corresponding to that original image, performing block-wise calibration of the target frames of the target cells in the original image to obtain a target frame position vector L_i of the target cells in the original image, L_i representing the target frame position vector of the target cells in the i-th original image; wherein L_i ∈ R^(1×p);

multiplying the target frame position vector L_i of the target cells in each original image with the first image feature association matrix H to obtain the image feature mask matrix M; wherein M ∈ R^(p×e), and e represents the number of hyperedges.
5. The cell counting method according to claim 4, wherein processing the hyperedges corresponding to the image blocks that do not contain target cells in the first image feature association matrix by using the image feature mask matrix to obtain a second image feature association matrix comprises:

performing dot multiplication of the image feature mask matrix M and the first image feature association matrix H to obtain a dot-product result matrix;

removing the hyperedges corresponding to image blocks that do not contain target cells from the dot-product result matrix, and increasing the weights of the hyperedges corresponding to image blocks that contain target cells;

and calculating the association matrix of the hyperedges corresponding to all image blocks containing target cells according to those weights, to obtain the second image feature association matrix A.
6. A cell counting apparatus, comprising:
the block module is used for carrying out block processing on the original image data to obtain a plurality of image blocks;
the extraction module is used for adding sequence information to each image block in the plurality of image blocks and extracting the image characteristics of the image blocks added with the sequence information to obtain an image characteristic set;
the first construction module is used for constructing a first image feature association matrix according to the image feature set; wherein constructing the first image feature association matrix according to the image feature set comprises: taking the image feature corresponding to each image block in the image feature set F as a node, and clustering the plurality of nodes by using a K-Means clustering method to obtain a plurality of clusters; taking each of the plurality of clusters as a hyperedge, and constructing an image block feature hypergraph according to the plurality of hyperedges; and obtaining a first image feature association matrix H according to the image block feature hypergraph;
the second construction module is used for constructing an image feature mask matrix according to the target frame position information of the target cells preset in the original image data and the first image feature association matrix;
the first processing module is used for processing the hyperedges corresponding to the image blocks that do not contain target cells in the first image feature association matrix by using the image feature mask matrix to obtain a second image feature association matrix;
the hypergraph module is used for constructing an image characteristic hypergraph according to the second image characteristic association matrix;
the second processing module is used for processing the image feature hypergraph by using a hypergraph convolutional neural network to obtain the density of target cells in the original image; wherein processing the image feature hypergraph by using the hypergraph convolutional neural network to obtain the density of target cells in the original image comprises: by the calculation formula

X^(l+1) = σ(D_v^(-1/2) A W D_e^(-1) A^T D_v^(-1/2) X^(l) θ^(l))

obtaining a density prediction matrix of the target cells, wherein X^(l) represents the image feature hypergraph matrix input to the l-th layer of the hypergraph convolutional neural network, X^(l+1) represents the image feature hypergraph matrix input to the (l+1)-th layer, θ^(l) represents the weights of the l-th layer of the hypergraph convolutional neural network, W represents the weight matrix of the hyperedges corresponding to image blocks that contain target cells and whose weights have been strengthened, D_v represents the degree matrix of the nodes, D_e represents the degree matrix of the hyperedges, σ represents an activation function, and A represents the second image feature association matrix; and acquiring the density of the target cells in the original image according to the density prediction matrix of the target cells;
and the counting module is used for acquiring the number of the target cells in the original image according to the density of the target cells in the original image.
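As referenced in the first construction module above, a minimal sketch of the K-Means based hypergraph construction, in which each cluster becomes one hyperedge connecting its member image blocks. scikit-learn's KMeans stands in for whatever clustering implementation is actually used, and the feature dimensionality, block count, and cluster count are arbitrary illustrative values.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_association_matrix(F: np.ndarray, n_edges: int) -> np.ndarray:
    """Cluster the image block features F (p x d) into n_edges clusters;
    each cluster is one hyperedge, giving the first image feature
    association matrix H (p x n_edges), with H[i, c] = 1 iff block i is
    in cluster c."""
    labels = KMeans(n_clusters=n_edges, n_init=10, random_state=0).fit_predict(F)
    H = np.zeros((F.shape[0], n_edges))
    H[np.arange(F.shape[0]), labels] = 1.0
    return H

F = np.random.default_rng(0).random((16, 64))  # 16 image blocks, 64-dim features
H = build_association_matrix(F, n_edges=4)
```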
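And a sketch of one layer of the hypergraph convolution used by the second processing module, followed by the counting module's summation of the predicted density map. The spectral form below is the standard hypergraph convolution and matches the symbols defined in claim 6; the choice of ReLU for σ, the random θ, and the stand-in A and W are assumptions for illustration only.

```python
import numpy as np

def hypergraph_conv(X, A, W, theta):
    """One hypergraph convolution layer:
    X_{l+1} = sigma(Dv^-1/2 . A . W . De^-1 . A^T . Dv^-1/2 . X_l . theta_l),
    with Dv/De the node/hyperedge degree matrices and sigma = ReLU."""
    dv = (A @ W).sum(axis=1)                       # node degrees: sum_e w(e) h(v, e)
    de = A.sum(axis=0)                             # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(de, 1e-12))
    out = Dv_inv_sqrt @ A @ W @ De_inv @ A.T @ Dv_inv_sqrt @ X @ theta
    return np.maximum(out, 0.0)                    # ReLU activation

rng = np.random.default_rng(0)
p, e, d = 16, 4, 64
A = (rng.random((p, e)) > 0.5).astype(float)  # stand-in second association matrix
W = np.diag(np.full(e, 2.0))                  # reinforced hyperedge weights
X = rng.random((p, d))                        # image feature hypergraph matrix
theta = rng.random((d, 1)) * 0.1              # layer weights (illustrative)
density = hypergraph_conv(X, A, W, theta)     # per-block density prediction
cell_count = float(density.sum())             # counting module: sum the density map
```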
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the cell counting method according to any one of claims 1 to 5 when executing the computer program.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the cell counting method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211674351.1A CN115861275B (en) | 2022-12-26 | 2022-12-26 | Cell counting method, cell counting device, terminal equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115861275A CN115861275A (en) | 2023-03-28 |
CN115861275B true CN115861275B (en) | 2024-02-06 |
Family
ID=85654743
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211674351.1A Active CN115861275B (en) | 2022-12-26 | 2022-12-26 | Cell counting method, cell counting device, terminal equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115861275B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2754075A2 (en) * | 2011-09-09 | 2014-07-16 | Philip Morris Products S.a.s. | Systems and methods for network-based biological activity assessment |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101542527A (en) * | 2006-11-16 | 2009-09-23 | Visiopharm A/S | Feature-based registration of sectional images |
CN109886928A (en) * | 2019-01-24 | 2019-06-14 | Ping An Technology (Shenzhen) Co., Ltd. | Target cell labeling method, device, storage medium and terminal device |
JPWO2021038840A1 (en) * | 2019-08-30 | 2021-03-04 | ||
CN111462036A (en) * | 2020-02-18 | 2020-07-28 | Tencent Technology (Shenzhen) Co., Ltd. | Pathological image processing method based on deep learning, model training method and device |
WO2021139258A1 (en) * | 2020-06-19 | 2021-07-15 | Ping An Technology (Shenzhen) Co., Ltd. | Image recognition based cell recognition and counting method and apparatus, and computer device |
CN114723652A (en) * | 2021-01-04 | 2022-07-08 | Fu Tai Hua Industry (Shenzhen) Co., Ltd. | Cell density determination method, cell density determination device, electronic apparatus, and storage medium |
CN115035017A (en) * | 2021-03-04 | 2022-09-09 | Fu Tai Hua Industry (Shenzhen) Co., Ltd. | Cell density grouping method, device, electronic apparatus and storage medium |
CN114549896A (en) * | 2022-01-24 | 2022-05-27 | Tsinghua University | Heterogeneous high-order representation method and device for full-view image for survival prediction |
CN114492648A (en) * | 2022-01-28 | 2022-05-13 | Tencent Technology (Shenzhen) Co., Ltd. | Object classification method, device, computer equipment, storage medium and program product |
Non-Patent Citations (3)
Title |
---|
Yao Xue et al. Cell Counting by Regression Using Convolutional Neural Network. Computer Vision - ECCV 2016 Workshops, 2016, Vol. 9913, full text. *
Water quality and mosquito larva breeding in the tailwater wetland pond of the Shanghai East District Water Purification Plant; Lu Xinyu; Xiao Bing; Huang Minsheng; He Yan; Li Xinran; Yin Chao; Leng Pei'en; Journal of East China Normal University (Natural Science Edition), No. 6, full text. *
Value of the systemic inflammatory response index in predicting the efficacy of transarterial chemoembolization for liver cancer; Hu Chao et al.; Journal of Clinical Radiology, Vol. 40, No. 7, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN115861275A (en) | 2023-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112183212B (en) | Weed identification method, device, terminal equipment and readable storage medium | |
CN114972339B (en) | Data enhancement system for bulldozer structural member production abnormity detection | |
CN115424053B (en) | Small sample image recognition method, device, equipment and storage medium | |
CN112183517B (en) | Card edge detection method, device and storage medium | |
CN115034315B (en) | Service processing method and device based on artificial intelligence, computer equipment and medium | |
CN114155397B (en) | Small sample image classification method and system | |
CN110826624A (en) | Time series classification method based on deep reinforcement learning | |
CN111539910B (en) | Rust area detection method and terminal equipment | |
CN113155784A (en) | Water transparency detection method, terminal device and storage medium | |
CN115861275B (en) | Cell counting method, cell counting device, terminal equipment and medium | |
CN111127407B (en) | Fourier transform-based style migration forged image detection device and method | |
CN109191452B (en) | Peritoneal transfer automatic marking method for abdominal cavity CT image based on active learning | |
CN114612919B (en) | Bill information processing system, method and device | |
CN116205918A (en) | Multi-mode fusion semiconductor detection method, device and medium based on graph convolution | |
CN118411568B (en) | Training of target recognition model, target recognition method, system, equipment and medium | |
CN111598184A (en) | DenseNet-based image noise identification method and device | |
CN112288748A (en) | Semantic segmentation network training and image semantic segmentation method and device | |
CN118314631B (en) | Concentration analysis method, device, equipment and storage medium based on sitting posture recognition | |
CN118015261B (en) | Remote sensing image target detection method based on multi-scale feature multiplexing | |
CN113505648B (en) | Pedestrian detection method, device, terminal equipment and storage medium | |
Ritter et al. | Automatic classification of chromosomes by means of quadratically asymmetric statistical distributions | |
CN118097724B (en) | Palm vein-based identity recognition method and device, readable storage medium and equipment | |
CN115251953B (en) | Motor imagery electroencephalogram signal identification method, device, terminal equipment and storage medium | |
CN111832427B (en) | EEG classification transfer learning method and system based on Euclidean alignment and Procrustes analysis | |
CN107808073B (en) | High-flux microorganism functional gene microarray processing method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||