CN118512278B - AI modeling method and device used before tooth 3D printing - Google Patents


Info

Publication number
CN118512278B
CN118512278B (application CN202410980163.4A)
Authority
CN
China
Prior art keywords
tooth
image
data
model
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410980163.4A
Other languages
Chinese (zh)
Other versions
CN118512278A (en)
Inventor
李兵奇
康柱
郭卜源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Water Health Technology Wenzhou Co ltd
Original Assignee
Water Health Technology Wenzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Water Health Technology Wenzhou Co ltd
Priority to CN202410980163.4A
Publication of CN118512278A
Application granted
Publication of CN118512278B
Legal status: Active


Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C — DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/004 — Means or methods for taking digitized impressions
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/512 — Intraoral means (radiation diagnosis specially adapted for dentistry)
    • A61B6/5211 — Processing of medical diagnostic data
    • G — PHYSICS
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/042 — Knowledge-based neural networks; logical representations of neural networks
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/048 — Activation functions
    • G06N3/08 — Learning methods
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/0012 — Biomedical image inspection
    • G06T7/66 — Analysis of image moments or centre of gravity
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/22 — Image preprocessing by selection of a specific region containing or referencing a pattern
    • G06V10/26 — Segmentation of patterns in the image field
    • G06V10/44 — Local feature extraction (edges, contours, corners, strokes); connectivity analysis
    • G06V10/80 — Fusion of data from various sources at sensor, preprocessing, feature-extraction or classification level
    • G06V10/82 — Recognition using neural networks
    • G06T2207/10028 — Range image; depth image; 3D point clouds
    • G06T2207/10081 — Computed x-ray tomography [CT]
    • G06T2207/10116 — X-ray image
    • G06T2207/20081 — Training; learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/30036 — Dental; teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Databases & Information Systems (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Epidemiology (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of 3D printing modeling, and specifically relates to an AI modeling method and device used before 3D printing of teeth. A data scanning module acquires oral tooth image data in different modalities using devices such as an oral scanner, a CBCT (cone-beam computed tomography) instrument, and an oral panoramic X-ray camera. A multi-modal data fusion module fuses the tooth image data from the different modalities via a multi-modal tooth data fusion construction method to construct multi-modal fused tooth three-dimensional point cloud data. An intelligent tooth image segmentation module precisely segments the fused point cloud data tooth by tooth using a three-dimensional point cloud image intelligent segmentation method. An intelligent tooth recognition module extracts the features of individual teeth using an intelligent tooth recognition method and outputs tooth types and relative position coordinates. A tooth model defect detection module intelligently identifies whether the modeled three-dimensional model has defects and marks the defect positions via a tooth model defect detection method.

Description

AI modeling method and device used before tooth 3D printing
Technical Field
The invention relates to the technical field of 3D printing modeling, in particular to an AI modeling method and device before 3D printing of teeth.
Background
Research into modeling technology prior to 3D printing has a deep technical background, and its significance is reflected in many aspects: it promotes the development of related fields, improves manufacturing efficiency, reduces cost, drives technological progress, and enriches artistic expression and entertainment experience.
Currently, modeling techniques prior to 3D printing of teeth mainly employ 3D scanning techniques, by which the tooth morphology of the patient can be converted into a digital model, which is a prerequisite step for 3D printing of dental models. However, this technology currently has mainly the following problems:
1. Precision problem: although 3D scanning techniques are capable of capturing detailed morphologies of teeth, their accuracy may be affected by the quality of the scanning device, the skill level of the operator, and the patient's oral conditions (e.g., saliva, tongue position, etc.).
2. Data processing problems: the scanned data needs to be further processed and optimized to meet the requirement of 3D printing. This process may involve complex software operations, requiring a professional to perform.
In prior art CN117618131A, a tooth modeling method and system are disclosed, which include scanning the patient's oral cavity with a high-precision three-dimensional oral scanning device and preprocessing the scanned data; segmenting and identifying the teeth of the oral model using image processing techniques; and improving model accuracy and naturalness while shortening modeling time by combining preprocessing of the raw scan data, accurate tooth segmentation and recognition, and matching and fusion against a standard tooth model library. However, this method is limited to traditional algorithmic techniques, depends heavily on the standard tooth model library, offers limited modeling accuracy, and generalizes poorly.
Aiming at the problems, the invention provides an AI modeling method and device used before tooth 3D printing.
Disclosure of Invention
To address these problems, the invention provides an AI modeling method and device used before 3D printing of teeth, improving the accuracy and generalization capability of tooth AI modeling before 3D printing.
In order to achieve the above purpose, the present invention provides the following technical solutions: the AI modeling method for the teeth before 3D printing is applied to an AI modeling device for the teeth before 3D printing, and comprises a tooth intelligent segmentation method, an intelligent tooth recognition method and a tooth model defect detection method.
Further, the tooth intelligent segmentation method comprises a multi-modal tooth data fusion construction method and a three-dimensional point cloud image intelligent segmentation method, and segments each tooth in the scanned tooth model so that features can be extracted for different types of teeth.
Further, the AI modeling method for teeth before 3D printing specifically includes the steps of:
Step one: tooth data in different modes are obtained through oral cavity scanning, CBCT (Cone-Beam Computed Tomography, dental Cone beam computer tomography), oral cavity X-panorama, head shadow side position and dental photo direct shooting methods.
Step two: and constructing a Gaussian pyramid with reduced resolution gradient, and carrying out Gaussian blur and downsampling on each input original image of different modes for a plurality of times, so as to obtain a series of sub-images with reduced resolution gradient layer by layer.
Step three: constructing a Laplacian pyramid: and upsampling each layer of image in the Gaussian pyramid from the previous layer of image and subtracting the image to obtain the Laplacian pyramid, wherein the Laplacian pyramid contains detail information in each layer of image, and the layer number of the Laplacian pyramid is N.
Step four: a mask image is created that represents the locations where fusion is desired.
Step five: adding a plurality of Laplacian pyramids of images to be fused according to a mask image, wherein the mask image is used as a weight, the weights can be determined based on image quality, spatial frequency of corresponding layers or other prior information, a new pyramid is formed by the addition result, and a calculation formula is expressed as follows by taking fusion of two images as an example:
Wherein the method comprises the steps of In order for the image one to be fused,For a corresponding weight value of the image,In order for the two images to be fused,For the corresponding weight value of the second image,
Step six: the N+1 layers of the Gaussian pyramids of the two images to be fused are fused according to the following formula to obtain a fused image PIC1:
step seven: and (3) up-sampling the PIC1, adding the PIC1 with the top layer of the new pyramid to obtain PIC2, up-sampling the PIC2, adding the PIC2 with the next layer to obtain PIC3, and repeating the process until a final three-dimensional point cloud data fusion result containing full-angle information is obtained.
Further, the three-dimensional point cloud image intelligent segmentation method comprises the following specific steps:
Step one: and removing invalid points and noise points in the point cloud data by adopting a Gaussian filtering algorithm so as to improve the quality of the point cloud data.
Step two: and sampling, and performing dimension reduction processing on the point cloud data to reduce the calculated amount.
Step three: normalizing, namely unifying the point cloud data into a specific coordinate system for subsequent processing.
Step four: and calculating the gradient amplitude and direction of each pixel point in the image.
Step five: non-maximum suppression suppresses non-edge pixels to 0, preserving edge pixels.
Step six: edge pixels are distinguished by setting a double threshold to distinguish between strong, weak and non-edge pixels.
Step seven: and D, dividing the data processed in the step six into training and testing data sets according to the proportion of 8:2.
Step eight: the design drawing convolutional neural network model architecture comprises an attention mechanism module, an edge convolutional layer and a gating iterative convolutional layer structure and is used for realizing three-dimensional point cloud data segmentation of the tooth model after the third processing step.
Step nine: inputting the feature vector of the point cloud midpoint into a convolution generating module, and then searching for a distance center point by using a k-NN algorithmThe nearest k points are calculated and their distances from the center point are calculatedAnd (3) representing.
Step ten: calculating by using edge convolution layer EdgeConv to obtain characteristic diagram matrixWherein, N represents the number of points in the point cloud, f represents the characteristic value of the convolution of the input graph, and the calculation formula of the edge convolution is as follows:
Wherein, Is a feature map matrixIs a constituent element of (1) representing a center pointDistance from its surrounding immediate pointMapping; a nonlinear activation function; And The graph corresponding to the respective positions is convolved.
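Steps nine and ten can be sketched in NumPy as below. This is an illustrative reading of the EdgeConv step (in the style of DGCNN-like edge convolutions), not the patent's exact layer: `knn` and `edge_features` are hypothetical helpers, and ReLU stands in for the unspecified activation $\sigma$.

```python
import numpy as np

def knn(points, i, k):
    """Indices of the k nearest neighbours of point i (excluding i itself)."""
    d = np.linalg.norm(points - points[i], axis=1)
    return np.argsort(d)[1:k + 1]

def edge_features(points, i, k, theta, phi):
    """e_ij = relu(theta @ (x_j - x_i) + phi @ x_i) for the k neighbours of x_i."""
    relu = lambda v: np.maximum(v, 0.0)
    xi = points[i]
    return np.array([relu(theta @ (points[j] - xi) + phi @ xi)
                     for j in knn(points, i, k)])

pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 2.0, 0.0],
                [5.0, 5.0, 5.0]])
theta = np.eye(3)          # identity weights: edge features reduce to offsets
phi = np.zeros((3, 3))
feats = edge_features(pts, 0, k=2, theta=theta, phi=phi)
```

With identity `theta` and zero `phi`, each edge feature is simply the (non-negative) offset to a neighbour, which makes the mapping easy to verify by hand.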
Step eleven: extracting spatial information characteristics of three-dimensional point cloud data of tooth models by using a gating iterative convolution layer, wherein the input of the iterative convolution layer is as followsThe output isThe specific operation steps comprise:
Wherein, Feature graphs for participating in the calculation; Representing a linear projection of the input value; i represents the number of recursions of the current operation; n represents Is a total number of iterative operations; is a deep convolution operation; Representing a gating mechanism; Representing element multiplication.
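The patent does not disclose the exact layer structure, so the following NumPy sketch is an assumption: it follows the recursive gated-convolution pattern suggested by the listed symbols (projection, repeated depthwise convolution, input-derived gate, element-wise multiplication), with a single shared sigmoid gate for brevity.

```python
import numpy as np

def depthwise_conv1d(x, kernel):
    """Per-channel 1-D convolution with 'same' padding; x is (channels, length)."""
    pad = len(kernel) // 2
    padded = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros_like(x)
    for c in range(x.shape[0]):
        for t in range(x.shape[1]):
            out[c, t] = padded[c, t:t + len(kernel)] @ kernel
    return out

def gated_iterative_conv(x, n, w_proj, w_gate, kernel):
    """p_i = DWConv(p_{i-1}) * gate(x), i = 1..n; p_0 = proj(x)."""
    p = w_proj @ x                                   # linear projection of input
    for _ in range(n):
        gate = 1.0 / (1.0 + np.exp(-(w_gate @ x)))   # sigmoid gate from input
        p = depthwise_conv1d(p, kernel) * gate       # element-wise multiplication
    return p

x = np.ones((2, 5))                       # 2 channels, length 5
out = gated_iterative_conv(x, n=2, w_proj=np.eye(2),
                           w_gate=np.eye(2), kernel=np.array([1.0]))
```

With an identity kernel the depthwise convolution is a no-op, so two iterations multiply the input by the squared gate — a convenient check that the recursion is wired correctly.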
Step twelve: and sending the output of the gating iterative convolution layer into a coordinate attention mechanism module, and improving the performance of the model by fully utilizing the remote dependency relationship among the elements.
Step thirteen: putting the graph convolution neural network model constructed in the eighth to twelfth steps on the training data set constructed in the seventh step to complete model training, deploying the trained neural network model on the test data set to complete testing, and respectively selecting mIoU (Mean Intersection over Union) evaluation indexes as test evaluation indexes, wherein a calculation formula is as follows:
where TP represents the number of pixels the model correctly predicts as a positive class, FP represents the number of pixels the model incorrectly predicts as a negative class, and FN represents the number of pixels the model incorrectly predicts as a negative class.
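As a concrete reading of the mIoU metric in step thirteen, the following NumPy sketch (the function name `miou` is ours) computes per-class IoU from TP, FP, FN counts and averages over classes:

```python
import numpy as np

def miou(pred, target, num_classes):
    """Mean IoU over classes: IoU_c = TP / (TP + FP + FN)."""
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (target == c))
        fp = np.sum((pred == c) & (target != c))
        fn = np.sum((pred != c) & (target == c))
        if tp + fp + fn > 0:           # skip classes absent from both
            ious.append(tp / (tp + fp + fn))
    return float(np.mean(ious))

pred   = np.array([0, 0, 1, 1])
target = np.array([0, 1, 1, 1])
score = miou(pred, target, num_classes=2)
# class 0: IoU = 1/2; class 1: IoU = 2/3; mean = 7/12
```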
Further, the intelligent tooth recognition method, based on an improved YOLOv8, specifically comprises the following steps:
step one: and (3) correcting and correcting the data processed by the intelligent tooth segmentation method, and supplementing and correcting the undersegmented, oversegregated and missing data.
Step two: and scaling the tooth three-dimensional point cloud data image, and scaling the image data to 256 x 256 size so as to reduce the system overhead and improve the speed of model operation, learning and convergence.
Step three: correcting the position of the three-dimensional point cloud image data of the teeth, and rotating and overturning the image data with the angle deviation or the position deviation by taking the front face of the teeth as a reference and the vertical placement of the teeth, so that all the image data are positioned at the same reference.
Step four: and (3) carrying out standard on the three-dimensional point cloud image data of the processed teeth, respectively selecting different teeth by using marking frames with different colors, and marking the coordinate positions and names of the teeth by using the tooth center line as a reference.
Step five: because the neural network model is better at processing two-dimensional image data, the tooth three-dimensional point cloud image data is regarded as hexahedron, 6 projected two-dimensional sub-images of FIG1 to FIG6 are obtained according to a certain projection sequence in a plane projection mode, and the projected images are marked.
Step six: and (3) carrying out normalization processing on the two-dimensional projection image pixel obtained in the step (V), and scaling the pixel value of the image to be between the intervals of [0,1] so as to eliminate scale difference among different features, reduce the operation amount as much as possible, and accelerate the model training and convergence process.
Step seven: and deleting or integrating redundant data to prevent collision or inconsistency between the data.
Step eight: an improved YOLOv neural network model is constructed, and average pooling, maximum pooling and jump connection structures are introduced in a large amount on the basis of the YOLOv model so as to amplify necessary features, reduce the problem of shallow feature loss caused by deepening of model layers and promote model convergence.
Step nine: and D, dividing the data processed in the seventh step into a training set, a testing set and a verification set according to the proportion of 7:2:1, wherein the verification set is mainly used for assisting in judging the convergence condition of the model in the training process, and preventing the occurrence of over fitting or model performance degradation.
Step ten: training the neural network model constructed in the eighth step on the training data set constructed in the ninth step, testing performance on the testing data set, wherein the output of the neural network model is the tooth name and position coordinate corresponding to the image, and the testing evaluation index adopts Accuracy (Accuracy) and the calculation formula is as follows:
Wherein TP (True Positive) denotes the number of positive samples that are correctly identified as positive samples; TN (True Negative) denotes the number of negative samples that are correctly identified as negative samples; FP (False Positive) denotes the number of negative samples that were erroneously identified as positive samples; FN (False Negative) denotes the number of positive samples that are erroneously identified as negative samples.
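The accuracy formula with these four counts is straightforward; a minimal sketch (the function name is ours):

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# e.g. 40 true positives, 50 true negatives, 5 false positives, 5 false negatives
acc = accuracy(tp=40, tn=50, fp=5, fn=5)   # 90 correct out of 100
```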
Step eleven: step ten neural network models that passed the test were deployed into the device for tooth recognition.
Further, the tooth model defect detection method is an important correction and assurance step of the AI modeling method used before tooth 3D printing; its specific steps are as follows:
Step one: constructing a model defect detection database, wherein data in the database adopts a mode of many-to-one, a plurality of lossy 3D modeling images correspond to a standard lossless image, the lossy image is used as a neural network input, a data marking frame marks the defect position, and the standard lossless image is used as a label.
Step two: the data in the database is divided into a training set and a test data set.
Step three: and constructing a neural network model.
Step four: and placing the neural network model on a training data set to complete model training, and then placing the trained neural network model on a test data set to complete model performance test, wherein the output of the neural network model is a corresponding defect position, and the defect position can be marked by a marking frame.
In another aspect, there is provided an AI modeling apparatus for 3D printing of teeth, applied to the AI modeling method for 3D printing of teeth of any one of the above, the AI modeling apparatus for 3D printing of teeth comprising:
Data scanning module: composed of an oral scanner, a CBCT instrument, an oral panoramic X-ray camera, a lateral cephalometric camera, and a direct dental camera; used to acquire oral tooth image data in different modalities.
Multimode data fusion module: the built-in multi-mode tooth data fusion construction method is used for fusing the tooth image data under different modes obtained by the data scanning module to construct multi-mode fused tooth three-dimensional point cloud data.
Intelligent tooth image segmentation module: the built-in three-dimensional point cloud image intelligent segmentation method is used for segmenting multi-mode fused tooth three-dimensional point cloud data according to teeth and assisting subsequent intelligent tooth recognition and modeling.
Intelligent tooth recognition module: the intelligent recognition method for the built-in teeth effectively extracts different tooth characteristics, gives out tooth types and relative position coordinates, and assists in 3D printing modeling of the teeth.
Intelligent modeling module: according to the tooth features identified by the intelligent tooth recognition module, and combining prior knowledge from the database, customizes the tooth model by 3D printing for the different requirements of different clients.
Tooth model defect detection module: the built-in tooth model defect detection method is used for intelligently identifying whether the modeled three-dimensional model is defective or not, marking the defect position and guaranteeing the accuracy of 3D printing.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the invention, the multi-modal tooth data fusion construction method obtains tooth state data from different angles and directions, and fuses the two-dimensional plane data into accurate, comprehensive three-dimensional point cloud data, greatly improving the accuracy of tooth AI modeling before 3D printing.
2. According to the invention, accurate segmentation of teeth in the oral cavity is realized by the intelligent segmentation method of the three-dimensional point cloud image, so that accurate differentiation of different tooth forms and states of different clients is realized, and a foundation is laid for realizing personalized customization of user requirements.
3. According to the tooth intelligent identification method, different teeth and relative coordinate positions thereof are accurately identified, so that the accuracy and reliability of tooth 3D printing modeling are guaranteed, and meanwhile, the tooth intelligent identification method has strong generalization capability.
4. According to the method, the possible fine defects of the model can be accurately detected in the modeling stage through the tooth model defect detection method, the defect positions can be accurately marked, effective guarantee is provided for the integrity and the high efficiency of the subsequent 3D printing, meanwhile, the printing error rate is reduced, and the printing cost is saved.
Drawings
FIG. 1 is a flow chart of an AI modeling method for teeth prior to 3D printing in accordance with the present invention;
FIG. 2 is a block diagram of an AI modeling apparatus for use with teeth prior to 3D printing in accordance with the present invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, the term "for example" is used to mean "serving as an example, instance, or illustration. Any embodiment described as "for example" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In an embodiment of the invention, an AI modeling method used before 3D printing of teeth, shown in fig. 1, is applied to an AI modeling device used before 3D printing of teeth and comprises a tooth intelligent segmentation method, an intelligent tooth recognition method and a tooth model defect detection method.
The tooth intelligent segmentation method comprises a multi-mode tooth data fusion construction method and a three-dimensional point cloud image intelligent segmentation method. The multi-mode tooth data fusion construction method fuses two-dimensional tooth images acquired at different angles, from different sources and in different modes into three-dimensional tooth point cloud fusion data, so that the characteristic information of the image data acquired in the different modes is retained to the greatest extent. The three-dimensional point cloud image intelligent segmentation method segments each tooth in the scanned and fused tooth model, so that feature extraction and recognition can conveniently be carried out on different types of teeth.
The intelligent tooth recognition method, which is based on a neural network, learns and extracts the characteristic information of individual teeth of different kinds, can accurately recognize and distinguish the characteristics and relative coordinate positions of different teeth, and thereby facilitates accurate modeling of the appearance characteristics and position information of different teeth.
The tooth model defect detection method, which is based on a neural network, realizes defect detection of the modeled electronic tooth model already at the modeling stage, accurately marks the defect positions, and assists in correcting and adjusting the modeling.
Referring to fig. 2, the present invention also provides an AI modeling apparatus for 3D printing of teeth, applied to any one of the AI modeling methods for 3D printing of teeth, the AI modeling apparatus for 3D printing of teeth comprising:
And a data scanning module: composed of an oral scanner, a CBCT (cone-beam computed tomography) instrument, an oral X-ray panoramic camera, a lateral cephalometric camera and a direct dental camera, and used for acquiring oral tooth image data in different modes.
Multimode data fusion module: the built-in multi-mode tooth data fusion construction method is used for fusing the tooth image data under different modes obtained by the data scanning module to construct multi-mode fused tooth three-dimensional point cloud data.
Intelligent tooth image segmentation module: the built-in three-dimensional point cloud image intelligent segmentation method is used for segmenting the multi-mode fused tooth three-dimensional point cloud data according to teeth and assisting in subsequent intelligent tooth recognition and modeling.
Intelligent tooth recognition module: the built-in intelligent tooth recognition method effectively extracts the features of different teeth, outputs tooth types and relative position coordinates, and assists 3D printing modeling of the teeth.
And an intelligent modeling module: according to the tooth information features identified by the intelligent tooth recognition module, combined with prior knowledge from the database, the tooth model is customized by 3D printing for different requirements and different clients.
Tooth model defect detection module: the built-in tooth model defect detection method is used for intelligently identifying whether the modeled three-dimensional model is defective or not, marking the defect position and guaranteeing the accuracy of 3D printing.
In a specific embodiment, a tooth 3D printing technology operator acquires tooth image data under different modes through various data acquisition instruments and devices installed on a data scanning module, then a multi-mode tooth data fusion construction method built in a multi-mode data fusion module fuses and constructs two-dimensional tooth image data under different modes into three-dimensional point cloud tooth image data, and then a three-dimensional point cloud image intelligent segmentation method built in an intelligent tooth image segmentation module is used for segmenting the three-dimensional point cloud data of the multi-mode fused tooth, so that different teeth are accurately segmented, and boundaries of the different teeth are clearly marked. Then, technicians accurately identify different tooth types, appearance data features and relative positions thereof through a tooth intelligent identification method built in the intelligent tooth identification module, and effective guidance is provided for 3D printing. After 3D digital modeling, intelligently identifying whether the modeled three-dimensional model has defects or not by a tooth model defect detection method built in a tooth model defect detection module, marking out the defect positions, guiding technicians to correct the defects in time, and guaranteeing the accuracy of 3D printing.
Example 1
In one embodiment, a technician segments each tooth in the scanned tooth model by the tooth intelligent segmentation method, so that feature extraction is convenient for different types of teeth. The tooth intelligent segmentation method comprises a multi-mode tooth data fusion construction method and a three-dimensional point cloud image intelligent segmentation method. The multi-mode tooth data fusion construction method comprises the following specific steps:
Step one: tooth data in different modes are obtained through oral cavity scanning, CBCT (Cone-Beam Computed Tomography, dental Cone beam computer tomography), oral cavity X-panorama, head shadow side position and dental photo direct shooting methods. Step two: and constructing a Gaussian pyramid with reduced resolution gradient, and carrying out Gaussian blur and downsampling on each input original image of different modes for a plurality of times, so as to obtain a series of sub-images with reduced resolution gradient layer by layer.
Step three: constructing a Laplacian pyramid: and upsampling each layer of image in the Gaussian pyramid from the previous layer of image and subtracting the image to obtain the Laplacian pyramid, wherein the Laplacian pyramid contains detail information in each layer of image, and the layer number of the Laplacian pyramid is N.
Step four: a mask image is created that represents the locations where fusion is desired.
Step five: adding a plurality of Laplacian pyramids of images to be fused according to a mask image, wherein the mask image is used as a weight, the weights can be determined based on image quality, spatial frequency of corresponding layers or other prior information, a new pyramid is formed by the addition result, and a calculation formula is expressed as follows by taking fusion of two images as an example:
Wherein the method comprises the steps of In order for the image one to be fused,For a corresponding weight value of the image,In order for the two images to be fused,For the corresponding weight value of the second image,
Step six: the N+1 layers of the Gaussian pyramids of the two images to be fused are fused according to the following formula to obtain a fused image PIC1:
Wherein, For the image obtained by the fusion of the i-th group,For the left-hand image of the i-th group,For the left-hand image weight,Is the i-th right image.
Step seven: and (3) up-sampling the PIC1, adding the PIC1 with the top layer of the new pyramid to obtain PIC2, up-sampling the PIC2, adding the PIC2 with the next layer to obtain PIC3, and repeating the process until a final three-dimensional point cloud data fusion result containing full-angle information is obtained.
Optionally, the above tooth data acquisition modes can be combined freely according to the equipment available in the actual scene or according to the intended use and modeling emphasis of the tooth model; for example, selecting the intraoral scan and the lateral cephalogram as data sources yields the high-resolution geometric characteristics of the dental crowns together with the soft- and hard-tissue structures and their relative positions.
With the multi-mode tooth data fusion construction method, two-dimensional planar tooth images acquired from different data sources can be fused into a three-dimensional point cloud image. The shape and structure of the three-dimensional tooth model are displayed stereoscopically through related software and a display screen on a computer or server, and tooth information at different angles can be shown by three-dimensional rotation, which is convenient for a doctor to view.
The intelligent three-dimensional point cloud image segmentation method comprises the following specific steps:
Step one: and removing invalid points and noise points in the point cloud data by adopting a Gaussian filtering algorithm so as to improve the quality of the point cloud data.
Step two: and sampling, and performing dimension reduction processing on the point cloud data to reduce the calculated amount.
Step three: normalizing, namely unifying the point cloud data into a specific coordinate system for subsequent processing.
Step four: and calculating the gradient amplitude and direction of each pixel point in the image.
Step five: non-maximum suppression suppresses non-edge pixels to 0, preserving edge pixels.
Step six: edge pixels are distinguished by setting a double threshold to distinguish between strong, weak and non-edge pixels.
Step seven: and D, dividing the data processed in the step six into training and testing data sets according to the proportion of 8:2.
Step eight: the design drawing convolutional neural network model architecture comprises an attention mechanism module, an edge convolutional layer and a gating iterative convolutional layer structure and is used for realizing three-dimensional point cloud data segmentation of the tooth model after the third processing step.
Step nine: inputting the feature vector of the point cloud midpoint into a convolution generating module, and then searching for a distance center point by using a k-NN algorithmThe nearest k points are calculated and their distances from the center point are calculatedAnd (3) representing.
Step ten: calculating by using edge convolution layer EdgeConv to obtain characteristic diagram matrixWherein, N represents the number of points in the point cloud, f represents the characteristic value of the convolution of the input graph, and the calculation formula of the edge convolution is as follows:
Wherein, Is a feature map matrixIs a constituent element of (1) representing a center pointDistance from its surrounding immediate pointMapping; a nonlinear activation function; And The graph corresponding to the respective positions is convolved.
Step eleven: extracting spatial information characteristics of three-dimensional point cloud data of tooth models by using a gating iterative convolution layer, wherein the input of the iterative convolution layer is as followsThe output isThe specific operation steps comprise:
Wherein, Feature graphs for participating in the calculation; Representing a linear projection of the input value; i represents the number of recursions of the current operation; n represents Is a total number of iterative operations; is a deep convolution operation; Representing a gating mechanism; Representing element multiplication.
Step twelve: and sending the output of the gating iterative convolution layer into a coordinate attention mechanism module, and improving the performance of the model by fully utilizing the remote dependency relationship among the elements.
Step thirteen: putting the graph convolution neural network model constructed in the eighth to twelfth steps on the training data set constructed in the seventh step to complete model training, deploying the trained neural network model on the test data set to complete testing, and respectively selecting mIoU (Mean Intersection over Union) evaluation indexes as test evaluation indexes, wherein a calculation formula is as follows:
where TP represents the number of pixels the model correctly predicts as a positive class, FP represents the number of pixels the model incorrectly predicts as a negative class, and FN represents the number of pixels the model incorrectly predicts as a negative class.
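For example, the mIoU evaluation of step thirteen can be computed from per-class confusion counts as in the following sketch (illustrative NumPy code under the per-class TP/FP/FN definitions above; the function names are assumptions, not part of the claimed method):

```python
import numpy as np

def confusion_counts(pred, target, cls):
    """TP, FP, FN for one class treated as the positive class."""
    tp = np.sum((pred == cls) & (target == cls))
    fp = np.sum((pred == cls) & (target != cls))  # other classes predicted as cls
    fn = np.sum((pred != cls) & (target == cls))  # cls predicted as something else
    return tp, fp, fn

def mean_iou(pred, target, num_classes):
    """mIoU: per-class TP / (TP + FP + FN), averaged over the classes present."""
    ious = []
    for c in range(num_classes):
        tp, fp, fn = confusion_counts(pred, target, c)
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return float(np.mean(ious))
```

For instance, with predictions [0, 0, 1, 1] against targets [0, 1, 1, 1], class 0 scores IoU 1/2 and class 1 scores 2/3, so the mean is 7/12.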
The Gaussian filtering algorithm preprocesses the acquired three-dimensional point cloud data, mainly removing the deviation points and noise points generated during image synthesis and the interference of other substances in the oral cavity during scanning. This makes it easier to find the boundaries between different teeth and effectively improves the accuracy of point cloud data segmentation.
The tooth shape, size, position and structure of individuals in different populations differ greatly, and it is difficult to apply one unified tooth model to all individuals. The three-dimensional point cloud tooth data segmentation method based on the graph neural network can, given a large amount of data, sufficient computing power and the constraint of a loss function, effectively extract and learn the characteristics of the boundary gaps between teeth, and therefore correctly identify different teeth and their boundaries. The method has strong generalization capability and flexibility: even when facing users with different tooth shapes and structures, it can correctly distinguish teeth and their boundaries through the learned prior knowledge, thereby realizing high-precision tooth segmentation.
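The neighborhood construction and edge convolution of steps nine and ten can be sketched as follows — a minimal NumPy illustration of k-NN search plus an EdgeConv-style feature map with ReLU activation and max aggregation over neighbours. The kernels θ and φ are random placeholders here, standing in for learned parameters; this is a sketch of the operation, not the trained model.

```python
import numpy as np

def knn(points, k):
    """Indices of the k nearest neighbours of every point (excluding the point itself)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def edge_conv(points, theta, phi, k=4):
    """EdgeConv as in step ten: e_ij = ReLU(theta·(x_j - x_i) + phi·x_i),
    max-pooled over the k neighbours of each centre point."""
    idx = knn(points, k)                      # step nine: k-NN around each centre point
    xi = points[:, None, :]                   # centre points x_i
    xj = points[idx]                          # neighbour points x_j, shape (N, k, 3)
    edges = (xj - xi) @ theta.T + xi @ phi.T  # edge features for every (i, j) pair
    edges = np.maximum(edges, 0.0)            # nonlinear activation (ReLU)
    return edges.max(axis=1)                  # aggregate to the N x f feature map E
```

The max aggregation makes the per-point feature depend only on the local neighbourhood geometry, which is what lets the layer pick up boundary gaps between adjacent teeth.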
Example 2
In one embodiment, to ensure the modeling accuracy of teeth before 3D printing, technicians use the intelligent tooth recognition method to accurately identify tooth types, appearance features and relative coordinate positions. The intelligent tooth recognition method is based on an improved YOLOv model, and comprises the following specific steps:
step one: and (3) correcting and correcting the data processed by the intelligent tooth segmentation method, and supplementing and correcting the undersegmented, oversegregated and missing data.
Step two: and scaling the tooth three-dimensional point cloud data image, and scaling the image data to 256 x 256 size so as to reduce the system overhead and improve the speed of model operation, learning and convergence.
Step three: correcting the position of the three-dimensional point cloud image data of the teeth, and rotating and overturning the image data with the angle deviation or the position deviation by taking the front face of the teeth as a reference and the vertical placement of the teeth, so that all the image data are positioned at the same reference.
Step four: and (3) carrying out standard on the three-dimensional point cloud image data of the processed teeth, respectively selecting different teeth by using marking frames with different colors, and marking the coordinate positions and names of the teeth by using the tooth center line as a reference.
Step five: because the neural network model is better at processing two-dimensional image data, the tooth three-dimensional point cloud image data is regarded as hexahedron, 6 projected two-dimensional sub-images of FIG1 to FIG6 are obtained according to a certain projection sequence in a plane projection mode, and the projected images are marked.
Step six: and (3) carrying out normalization processing on the two-dimensional projection image pixel obtained in the step (V), and scaling the pixel value of the image to be between the intervals of [0,1] so as to eliminate scale difference among different features, reduce the operation amount as much as possible, and accelerate the model training and convergence process.
Step seven: and deleting or integrating redundant data to prevent collision or inconsistency between the data.
Step eight: an improved YOLOv neural network model is constructed, and average pooling, maximum pooling and jump connection structures are introduced in a large amount on the basis of the YOLOv model so as to amplify necessary features, reduce the problem of shallow feature loss caused by deepening of model layers and promote model convergence.
Step nine: and D, dividing the data processed in the seventh step into a training set, a testing set and a verification set according to the proportion of 7:2:1, wherein the verification set is mainly used for assisting in judging the convergence condition of the model in the training process, and preventing the occurrence of over fitting or model performance degradation.
Step ten: training the neural network model constructed in the eighth step on the training data set constructed in the ninth step, testing performance on the testing data set, wherein the output of the neural network model is the tooth name and position coordinate corresponding to the image, and the testing evaluation index adopts Accuracy (Accuracy) and the calculation formula is as follows:
Wherein TP (True Positive) denotes the number of positive samples that are correctly identified as positive samples; TN (True Negative) denotes the number of negative samples that are correctly identified as negative samples; FP (False Positive) denotes the number of negative samples that were erroneously identified as positive samples; FN (False Negative) denotes the number of positive samples that are erroneously identified as negative samples.
Step eleven: step ten neural network models that passed the test were deployed into the device for tooth recognition.
Because tooth states differ greatly between individuals, some inaccurate segmentation of the tooth three-dimensional point cloud image after segmentation is unavoidable. Checking, correcting, scaling and position-correcting the data processed by the intelligent tooth segmentation method reduces the system overhead, improves the speed of model operation, learning and convergence, brings all image data to the same standard, and improves recognition accuracy.
According to the image two-dimensional projection method, the three-dimensional point cloud data are vertically projected in six different directions to obtain six different two-dimensional sub-images, the teeth are marked on the projected images in the different directions, the structure and appearance characteristic information of the teeth is fully utilized, and enough learnable hidden priori characteristics are provided for the neural network model, so that the neural network model can learn and utilize the structure, appearance and position characteristic information of different teeth better.
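The six-direction projection described above can be sketched as follows — an illustrative NumPy version that treats the normalized point cloud as a hexahedron and produces one binary occupancy image per face. Rendering opposite faces as mirrored images is a simplification made for this sketch; depth-aware projection and the FIG1–FIG6 ordering are left as assumptions.

```python
import numpy as np

def six_view_projections(points, res=32):
    """Project a point cloud onto the 6 faces of its bounding hexahedron
    as binary occupancy images (stand-ins for FIG1 to FIG6)."""
    lo, hi = points.min(0), points.max(0)
    span = np.where(hi - lo > 0, hi - lo, 1.0)
    norm = (points - lo) / span                 # normalise coordinates to [0,1]^3
    views = []
    for axis in range(3):                       # drop one axis per projection plane
        rest = [a for a in range(3) if a != axis]
        uv = np.minimum((norm[:, rest] * res).astype(int), res - 1)
        img = np.zeros((res, res))
        img[uv[:, 0], uv[:, 1]] = 1.0
        views.append(img)                       # face seen from the positive direction
        views.append(img[::-1].copy())          # simplified opposite face: mirrored
    return views
```

Each projection discards one coordinate, so the six images together expose the structure and outline of the dentition from every axis direction, which is the learnable prior the text refers to.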
The intelligent tooth recognition method based on the improved YOLOv model adds, on top of the original image recognition task, the function of locating the recognized object: the output of the network comprises not only the recognized tooth category but also the relative coordinate position of the tooth in the oral cavity. The coordinate system is two-dimensional, with the upper right of the center line as the first quadrant, the upper left as the second quadrant, the lower left as the third quadrant and the lower right as the fourth quadrant; each tooth is described by two coordinates. This realizes accurate tooth positioning and assists 3D printing modeling of the teeth.
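The quadrant convention of that two-dimensional coordinate system can be expressed as a small helper (an illustrative mapping assuming the origin sits on the dental center line, with positive x to the patient's right-hand side of the image):

```python
def tooth_quadrant(x, y):
    """Quadrant of a tooth coordinate relative to the center-line origin:
    1 = upper right, 2 = upper left, 3 = lower left, 4 = lower right."""
    if x >= 0 and y >= 0:
        return 1
    if x < 0 and y >= 0:
        return 2
    if x < 0:
        return 3
    return 4
```

Pairing this quadrant index with the two per-tooth coordinates output by the network gives an unambiguous position label for each recognized tooth.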
Example 3
In one embodiment, a technician adopts the tooth model defect detection method to realize defect detection and effective correction for the AI modeling method before 3D printing of teeth, improving the accuracy of 3D printing and effectively saving 3D printing cost. The specific steps comprise:
Step one: constructing a model defect detection database, wherein data in the database adopts a mode of many-to-one, a plurality of lossy 3D modeling images correspond to a standard lossless image, the lossy image is used as a neural network input, a data marking frame marks the defect position, and the standard lossless image is used as a label.
Step two: the data in the database is divided into a training set and a test data set.
Step three: and constructing a neural network model.
Step four: and placing the neural network model on a training data set to complete model training, and then placing the trained neural network model on a test data set to complete model performance test, wherein the output of the neural network model is a corresponding defect position, and the defect position can be marked by a marking frame.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (5)

1. The AI modeling method for the teeth before 3D printing comprises a tooth intelligent segmentation method, an intelligent tooth recognition method and a tooth model defect detection method, and is characterized in that the tooth intelligent segmentation method segments each tooth in a tooth model obtained by scanning, so that feature extraction is convenient for different types of teeth; the intelligent tooth recognition method learns and extracts the characteristic information of different kinds of individual teeth through the intelligent tooth recognition method based on the neural network, and can accurately recognize and distinguish the characteristics and the relative coordinate positions of different teeth; the tooth model defect detection method is used for realizing the defect detection of the electronic tooth model obtained by modeling in the modeling stage by an intelligent tooth model defect detection method based on a neural network, accurately marking the defect position and assisting in correcting and adjusting the modeling;
The tooth intelligent segmentation method comprises a multi-mode tooth data fusion construction method and a three-dimensional point cloud image intelligent segmentation method, wherein the multi-mode tooth data fusion construction method fuses two-dimensional tooth images acquired at different angles, from different sources and in different modes to form three-dimensional tooth point cloud fusion data, so that the characteristic information of the image data acquired in different modes is retained to the greatest extent; the three-dimensional point cloud image intelligent segmentation method segments each tooth in the scanned and fused tooth model, so that feature extraction and recognition are convenient for different types of teeth;
the multi-mode tooth data fusion construction method specifically comprises the following steps:
Step one: obtaining tooth data under different modes by oral scanning, CBCT, oral X panorama, head shadow side position and tooth photo direct shooting methods;
step two: constructing a Gaussian pyramid with reduced resolution gradient, and carrying out Gaussian blur and downsampling on each input original image of different modes for a plurality of times so as to obtain a series of sub-images with reduced resolution gradient layer by layer;
step three: constructing a Laplacian pyramid: each layer of the Laplacian pyramid is obtained by upsampling the next (lower-resolution) layer of the Gaussian pyramid and subtracting it from the current Gaussian layer, wherein the Laplacian pyramid contains the detail information of each layer, and the number of layers of the Laplacian pyramid is N;
step four: creating a mask image, wherein the mask image represents a position to be fused;
Step five: adding a plurality of Laplacian pyramids of images to be fused according to a mask image, wherein the mask image is used as a weight, the weights can be determined based on image quality, spatial frequency of corresponding layers or other prior information, a new pyramid is formed by the addition result, and a calculation formula is expressed as follows by taking fusion of two images as an example:
Wherein the method comprises the steps of In order for the image one to be fused,For a corresponding weight value of the image,In order for the two images to be fused,For the corresponding weight value of the second image,
Step six: the N+1 layers of the Gaussian pyramids of the two images to be fused are fused according to the following formula to obtain a fused image PIC1:
Wherein, For the image obtained by the fusion of the i-th group,For the left-hand image of the i-th group,For the left-hand image weight,Right side image for the i-th group;
Step seven: the PIC1 is up-sampled and then added with the top layer of the new pyramid to obtain PIC2, then the PIC2 is up-sampled and added with the next layer to obtain PIC3, and the process is repeated until a final three-dimensional point cloud data fusion result containing all-angle information is obtained;
The intelligent segmentation method for the three-dimensional point cloud image comprises the following specific steps:
step one: removing invalid points and noise points in the point cloud data by adopting a Gaussian filtering algorithm so as to improve the quality of the point cloud data;
Step two: sampling, and performing dimension reduction processing on the point cloud data to reduce the calculated amount;
step three: normalizing, namely unifying the point cloud data to a specific coordinate system;
step four: calculating the gradient amplitude and direction of each pixel point in the image;
Step five: non-maximum suppression, suppressing non-edge pixels to 0, and reserving edge pixels;
Step six: judging edge pixels by setting double thresholds, and distinguishing strong edges, weak edges and non-edge pixels;
Step seven: dividing the data processed in the step six into training and testing data sets according to the proportion of 8:2;
Step eight: the design diagram convolutional neural network model architecture comprises an attention mechanism module, an edge convolutional layer and a gating iterative convolutional layer structure, and is used for realizing three-dimensional point cloud data segmentation of the tooth model after the third processing step;
step nine: inputting the feature vector of the point cloud midpoint into a convolution generating module, and then searching for a distance center point by using a k-NN algorithm The nearest k points are calculated and their distances from the center point are calculatedA representation;
Step ten: calculating by using edge convolution layer EdgeConv to obtain characteristic diagram matrix Wherein, N represents the number of points in the point cloud, f represents the characteristic value of the convolution of the input graph, and the calculation formula of the edge convolution is as follows:
Wherein, Is a feature map matrixIs a constituent element of (1) representing a center pointDistance from its surrounding immediate pointMapping; a nonlinear activation function; And A graph convolution kernel corresponding to each position;
Step eleven: extracting spatial information characteristics of three-dimensional point cloud data of tooth models by using a gating iterative convolution layer, wherein the input of the iterative convolution layer is as follows The output isThe specific operation steps comprise:
Wherein, Feature graphs for participating in the calculation; Representing a linear projection of the input value; i represents the number of recursions of the current operation; n represents Is a total number of iterative operations; is a deep convolution operation; Representing a gating mechanism; representing element multiplication;
step twelve: the output of the gating iterative convolution layer is sent to a coordinate attention mechanism module, and the performance of the model is improved by fully utilizing the remote dependency relationship among elements;
Step thirteen: putting the graph convolution neural network model constructed in the eighth to twelfth steps on the training data set constructed in the seventh step to complete model training, deploying the trained neural network model on the test data set to complete testing, and respectively selecting mIoU evaluation indexes as test evaluation indexes, wherein a calculation formula is as follows:
Wherein TP represents the number of pixels of the model that correctly predicts the positive class, FP represents the number of pixels of the model that incorrectly predicts the negative class as the positive class, and FN represents the number of pixels of the model that incorrectly predicts the positive class as the negative class;
the intelligent tooth recognition method is based on an improved YOLOv model, and comprises the following specific steps:
Step one: the data processed by the intelligent tooth segmentation method is corrected and revised, and the under-segmentation, over-segmentation and missing data are supplemented and revised;
step two: scaling the tooth three-dimensional point cloud data image, and scaling the image data to 256 x 256 size so as to reduce the system overhead and improve the speed of model operation, learning and convergence;
Step three: correcting the position of the three-dimensional point cloud image data of the teeth, and rotating and overturning the image data with the angle deviation or the position deviation by taking the front face of the teeth as a reference and the vertical placement of the teeth to ensure that all the image data are positioned at the same reference;
step four: the method comprises the steps of performing standard on three-dimensional point cloud image data of the processed teeth, respectively selecting different teeth by using marking frames with different colors, and marking the coordinate positions and names of the teeth by using the tooth center line as a reference;
step five: because the neural network model is better in processing two-dimensional image data, the tooth three-dimensional point cloud image data is regarded as hexahedron, 6 projected two-dimensional sub-images of FIG1 to FIG6 are obtained according to a certain projection sequence in a plane projection mode, and the projected images are marked;
Step six: carrying out normalization processing on the two-dimensional projection image pixel obtained in the step five, scaling the pixel value of the image to be between the intervals of [0,1] so as to eliminate scale difference among different features, simultaneously reducing the operation amount as much as possible, and accelerating the model training and convergence process;
Step seven: deleting or integrating redundant data to prevent collision or inconsistency between the data;
Step eight: an improved YOLOv neural network model is constructed, and an average pooling, maximum pooling and jump connection structure is introduced in a large amount on the basis of the YOLOv model so as to amplify necessary characteristics, reduce the problem of shallow characteristic loss caused by deepening the model layer number and promote model convergence;
Step nine: dividing the data processed in step seven into a training set, a test set, and a validation set in a 7:2:1 ratio, wherein the validation set is mainly used during training to help judge model convergence and to prevent overfitting or degradation of model performance;
Step ten: training the neural network model constructed in step eight on the training set built in step nine and testing its performance on the test set, wherein the output of the neural network model is the tooth name and position coordinates corresponding to the image; the test evaluation metric adopts Accuracy, calculated as: Accuracy = (TP + TN) / (TP + TN + FP + FN)
Where TP represents the number of positive samples correctly identified as positive; TN represents the number of negative samples correctly identified as negative; FP represents the number of negative samples incorrectly identified as positive; FN represents the number of positive samples incorrectly identified as negative;
Step eleven: deploying the neural network model tested in step ten into the device for tooth recognition;
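The preprocessing and evaluation operations in steps two, six, nine, and ten above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the helper names, the nearest-neighbour resize, and the array shapes are assumptions.

```python
import numpy as np

def preprocess(images):
    """Scale projection images to 256 x 256 and normalize pixels to [0, 1]."""
    out = []
    for img in images:
        # Nearest-neighbour resize via index sampling (illustrative; a real
        # pipeline would typically use an image library's resize routine).
        h, w = img.shape[:2]
        rows = np.arange(256) * h // 256
        cols = np.arange(256) * w // 256
        resized = img[rows][:, cols]
        out.append(resized.astype(np.float32) / 255.0)  # pixel values -> [0, 1]
    return np.stack(out)

def split_7_2_1(samples, seed=0):
    """Shuffle and split samples into training/test/validation sets at 7:2:1."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(len(samples) * 0.7)
    n_test = int(len(samples) * 0.2)
    train = [samples[i] for i in idx[:n_train]]
    test = [samples[i] for i in idx[n_train:n_train + n_test]]
    val = [samples[i] for i in idx[n_train + n_test:]]
    return train, test, val

def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN), as in step ten."""
    return (tp + tn) / (tp + tn + fp + fn)
```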
The tooth model defect detection method specifically comprises the following steps:
Step one: constructing a model defect detection database, wherein the data in the database adopt a many-to-one scheme: a plurality of defective 3D modeling images correspond to one standard defect-free image; the defective images serve as the neural network input, data marking frames annotate the defect positions, and the standard defect-free image serves as the label;
Step two: dividing the data in the database into a training set and a test set;
Step three: constructing a neural network model;
Step four: training the neural network model on the training set, and then testing the trained model's performance on the test set, wherein the output of the neural network model is the corresponding defect position, which can be marked with a marking frame.
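A minimal sketch of the many-to-one defect database described in step one, assuming the defect position can be derived from the pixel difference between a defective image and the standard defect-free image; the threshold, field names, and bounding-box extraction are illustrative assumptions, not from the patent.

```python
import numpy as np

def build_defect_pairs(lossless, lossy_list):
    """Pair each defective (lossy) image with the single standard defect-free
    (lossless) image, and derive a defect bounding box from their difference.

    Many-to-one: several lossy images share one lossless label, as in the
    database described above.
    """
    pairs = []
    for lossy in lossy_list:
        diff = np.abs(lossy.astype(np.int32) - lossless.astype(np.int32))
        mask = diff > 10  # pixels deviating from the standard model (assumed threshold)
        ys, xs = np.nonzero(mask)
        if len(ys):
            box = (xs.min(), ys.min(), xs.max(), ys.max())  # defect bounding box
        else:
            box = None  # no defect detected
        pairs.append({"input": lossy, "label": lossless, "defect_box": box})
    return pairs
```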
2. An AI modeling apparatus for use before 3D printing of teeth, characterized by being applied to the AI modeling method for use before 3D printing of teeth as set forth in claim 1, the apparatus comprising:
a data scanning module, consisting of an oral scanner, a CBCT (cone-beam computed tomography) instrument, an oral panoramic X-ray camera, a lateral cephalometric camera, and a direct tooth camera, for acquiring oral and dental image data in different modalities.
3. The AI modeling apparatus for use before 3D printing of a tooth of claim 2, wherein the apparatus comprises:
Multimodal data fusion module: a built-in multimodal tooth data fusion construction method is used to fuse the tooth image data in different modalities obtained by the data scanning module and construct multimodal fused tooth three-dimensional point cloud data;
Intelligent tooth image segmentation module: a built-in three-dimensional point cloud image intelligent segmentation method is used to segment the multimodal fused tooth three-dimensional point cloud data tooth by tooth, assisting subsequent intelligent tooth recognition and modeling.
4. The AI modeling apparatus for use before 3D printing of a tooth of claim 2, wherein the apparatus comprises:
Intelligent tooth recognition module: a built-in intelligent tooth recognition method effectively extracts the features of different teeth and outputs the tooth type and relative position coordinates, assisting 3D-printing modeling of the teeth;
Intelligent modeling module: based on the tooth features identified by the intelligent tooth recognition module, combined with prior knowledge from the database, tooth models are customized by 3D printing according to the different requirements of different clients.
5. The AI modeling apparatus for use before 3D printing of a tooth of claim 2, wherein the apparatus comprises:
Tooth model defect detection module: a built-in tooth model defect detection method intelligently identifies whether the modeled three-dimensional model has defects and marks the defect positions, ensuring the accuracy of 3D printing.
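The module chain described in claims 2 to 5 can be sketched as a single pipeline; every callable below is a hypothetical placeholder standing in for the corresponding module, not the patent's implementation.

```python
# Minimal sketch of how the device's modules might chain together: scan data
# -> multimodal fusion -> segmentation -> recognition -> 3D modeling
# -> defect detection, mirroring the module list in the claims above.

def run_pipeline(scans, fuse, segment, recognize, model_tooth, detect_defects):
    """Run the claimed module chain over one set of multimodal scan data."""
    point_cloud = fuse(scans)                  # multimodal data fusion module
    teeth = segment(point_cloud)               # intelligent tooth segmentation module
    labels = [recognize(t) for t in teeth]     # intelligent tooth recognition module
    models = [model_tooth(t, l)                # intelligent modeling module
              for t, l in zip(teeth, labels)]
    defects = [detect_defects(m) for m in models]  # defect detection module
    return models, defects
```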
CN202410980163.4A 2024-07-22 2024-07-22 AI modeling method and device used before tooth 3D printing Active CN118512278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410980163.4A CN118512278B (en) 2024-07-22 2024-07-22 AI modeling method and device used before tooth 3D printing


Publications (2)

Publication Number Publication Date
CN118512278A CN118512278A (en) 2024-08-20
CN118512278B true CN118512278B (en) 2024-10-29

Family

ID=92284294


Country Status (1)

Country Link
CN (1) CN118512278B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118709579B (en) * 2024-08-29 2024-11-05 浙江毫微米科技有限公司 3D printing method, system, device, medium, and program in virtual environment

Citations (2)

Publication number Priority date Publication date Assignee Title
CN116421341A (en) * 2023-04-17 2023-07-14 上海伊姆特医疗科技有限公司 Orthognathic surgery planning method, orthognathic surgery planning equipment, orthognathic surgery planning storage medium and orthognathic surgery navigation system
CN117876578A (en) * 2023-12-15 2024-04-12 北京大学口腔医学院 Orthodontic tooth arrangement method based on crown root fusion

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
PL3595574T3 (en) * 2017-03-17 2024-01-03 Nobel Biocare Services Ag Automatic dental arch mapping system and method
EP3673863A1 (en) * 2018-12-28 2020-07-01 Trophy 3d printing optimization using clinical indications
EP3673864A1 (en) * 2018-12-28 2020-07-01 Trophy Tooth segmentation using tooth registration
CN110363750B (en) * 2019-06-28 2023-05-09 福建师范大学 Automatic extraction method for root canal morphology based on multi-mode data fusion
WO2021155230A1 (en) * 2020-01-31 2021-08-05 James R. Glidewell Dental Ceramics, Inc. Teeth segmentation using neural networks
CN113516784B (en) * 2021-07-27 2023-05-23 四川九洲电器集团有限责任公司 Tooth segmentation modeling method and device
CN114880924A (en) * 2022-04-22 2022-08-09 东莞中科云计算研究院 Deep learning-based automatic design method and system for dental prosthesis
CN116129112A (en) * 2022-12-28 2023-05-16 深圳市人工智能与机器人研究院 Oral cavity three-dimensional point cloud segmentation method of nucleic acid detection robot and robot
CN115953583B (en) * 2023-03-15 2023-06-20 山东大学 Tooth segmentation method and system based on iterative boundary optimization and deep learning
CN117437477A (en) * 2023-11-03 2024-01-23 燕山大学 Defect detection method of 3D printing dot matrix structure




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant