FI130746B1 - Determining tooth color shade based on image obtained with a mobile device
- Publication number
- FI130746B1 (application FI20195916A)
- Authority
- FI
- Finland
- Prior art keywords
- image
- tooth
- color
- training
- embeddings
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C13/00—Dental prostheses; Making same
- A61C13/08—Artificial teeth; Making same
- A61C13/082—Cosmetic aspects, e.g. inlays; Determination of the colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C19/00—Dental auxiliary appliances
- A61C19/04—Measuring instruments specially adapted for dentistry
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/46—Measurement of colour; Colour measuring devices, e.g. colorimeters
- G01J3/463—Colour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/46—Measurement of colour; Colour measuring devices, e.g. colorimeters
- G01J3/50—Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors
- G01J3/508—Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors measuring the colour of teeth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30036—Dental; Teeth
Abstract
The present invention relates to a computer-implemented method, a data-processing system, data-processing apparatuses and computer program products for defining the color shade of a tooth using a camera of a mobile communication device. On the basis of a received image of a tooth of a subject, which comprises a part of an image acquired with a plain camera of the mobile communication device, an indication of lighting conditions is obtained and applicable training images are selected from a training database. The training images comprise images of model teeth with known colors. K-means clustering is applied to the received image to obtain color code embeddings of the received image, and the color code embeddings of the received image are adjusted based on the indication of lighting conditions. The obtained color code embeddings are compared to the color code embeddings of all selected training images to find the training image whose color code embeddings have the lowest distance to those of the received image, and the color shade of the tooth in the received image is determined to be equal to the color shade of the model tooth shown in that training image. Tooth color information indicative of the defined tooth color shade is communicated back to the mobile communication device from which the image was received.
Description
Determining tooth color shade based on image obtained with a mobile device
The present invention relates to a method, a system and a computer program product related to determining the color shade of a tooth of a subject. More particularly, the invention relates to a method, a system and a computer program product for determining the color shade of a tooth in order to enable manufacturing of an artificial tooth, artificial teeth or a dental crown (corona artificialis) in the correct color, in order to enable determining a change in tooth color, and/or in order to provide an indication that the tooth may suffer from an abnormality.
Traditionally, the definition of color shade and the selection of the color of an artificial tooth or teeth on the basis of the defined color shade rely on the dentist or dental technician manually comparing the tooth or teeth of the subject to a matrix of color shade models. This has been found to be an error-prone method, often leading to a need to re-produce the expensive tooth crowns or artificial teeth in a different color.
Using photography as such, even with a camera of a mobile phone, for taking images based on which tooth color shade is defined has been a subject of interest. Natural teeth are formed by layers of materials having different optical characteristics. In particular, the enamel of the outer part of the tooth greatly affects the color shade of the tooth, but the dentine under the enamel may also affect the color shade, especially if the enamel is particularly thin and/or translucent. The thickness of the enamel varies. This complexity of the structure of the teeth makes color shade definition a challenging task.
Various methods have been introduced which aim to increase the reliability of automatic color shade recognition of a tooth. A color can easily be detected from an image as such, but the color appearing in the image may not correspond to the actual color of the subject, because the process of photography includes parameters that significantly affect the result. Lighting is one of the main factors that makes colors in an image appear different from those of the actual subject, including both specific lighting arranged to fall on the subject for imaging and ambient lighting. Further affecting factors relate, for example, to the type, resolution and settings of the camera, the distance between the camera and the subject, and so on.
Imaging-based solutions for tooth color shade detection recognize the problem caused by lighting conditions, which may significantly affect the colors appearing in the image. Two main types of solutions are known in the art for calibrating the color shades: lighting conditions are standardized, for example using a lighting device that blocks out any ambient light that would affect the acquired image, and/or one or more standard reference colors or color shades are included in the same image with the subject tooth or teeth. These calibration methods may be used either separately or combined.
International patent application WO17166326 A1 discloses a method and device for realizing color comparison of an artificial tooth using an image. A standardized color comparison environment having a standard light source is provided. The image as such is sent to an artificial tooth production center, and a technician compares the image, shown on a color-corrected monitor, to a selection of colors that are under a like standard light source.
International patent application WO18080413 A2 discloses an automatic dental color determination system integrated into mobile phones or tablets. Lighting conditions during acquisition of the image are standardized by using a mechanical light isolation apparatus that surrounds the mouth of the subject. A digital platform then performs dental color determination on the basis of the image.
International patent application WO9956658 A1 discloses an automated tooth shade analysis. An image of the patient's teeth is acquired with black and white normalization references shown in the same image. Software then normalizes the image using the normalization references, and the color of the tooth is obtained from the normalized image.
However, mechanical lighting calibration devices tend to be cumbersome to use, and/or the standard reference color or colors always have to be available on the spot when the image is taken. Further, over longer periods of time, the reference colors may change, for example due to dirt or fading, which may lead to misinterpretation of the tooth color shade.
Thus, a more robust and versatile solution is needed that enables easy and reliable determination of tooth color shade.
An object is to provide a method and apparatus that solve the problem of determining tooth color shade. The objects of the present invention are achieved with a method performed in a server according to claim 1 and with a method performed in a mobile communication device according to claim 8. The objects of the present invention are further achieved with a computer program product according to claim 12, a data-processing system comprising means for carrying out the method steps, a data-processing apparatus according to claim 13 and a mobile communication device according to claim 14.

The preferred embodiments of the invention are disclosed in the dependent claims.

According to a first method aspect, a computer-implemented method of defining the color shade of a tooth using a camera of a mobile communication device is provided. The method comprises receiving an image of a tooth of a subject,
wherein the received image comprises a part of an image acquired with a plain camera of the mobile communication device, and obtaining an indication of the lighting conditions of the received image. The method also comprises selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with known color, and applying K-means clustering to the received image to obtain color code embeddings of the received image. The method further comprises adjusting the color code embeddings of the received image based on the indication of lighting conditions, comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with color code embeddings having the lowest distance to the received image's color code embeddings, and defining the color shade of the tooth in the received image to be equal to the color shade of the model tooth shown in the training image having the lowest distance of color code embeddings to those of the received image. The method also comprises communicating tooth color information indicative of the defined tooth color shade back to the mobile communication device from which the image was received.
According to a second aspect, the area of the received image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and color code embeddings are defined for each cell of the matrix.
According to a third aspect, the training database is one of a private training database and a global training database.

According to a fourth aspect, the received image is associated with a label indicating the lighting conditions in which the received image was acquired.

According to a fifth aspect, adjusting the color code embeddings comprises deducting from the obtained color code embeddings of the received image a difference between the color code embeddings of a global reference image and those of a calibration image acquired with the mobile communication device in lighting conditions that approximately correspond to the lighting conditions of the received image.
According to a sixth aspect, the applicable training images comprise a plurality of training images, each associated with a label indicating that the respective training image has been acquired in approximately similar lighting conditions to the received image and the calibration image, and a label indicating the actual color shade of the model tooth shown in the respective training image.
According to a seventh aspect, adjusting the color code embeddings comprises calculating a magnitude difference between the color code embeddings of the received image and the color code embeddings of a calibration image acquired with the mobile communication device in approximately similar lighting conditions.
According to an eighth aspect, applicable training images comprise a plurality of training images associated with magnitude difference information, and the training image having the lowest distance is defined by comparing the magnitude difference of the received image and magnitude differences associated with the applicable training images.
According to a ninth aspect, the method further comprises exporting at least part of the tooth color information to another application or data processing system.
According to a tenth aspect, the method further comprises at least one of: providing the defined tooth color information to be used as a basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade, comparing the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining a change of the color shade of the tooth, and providing an indication that the defined tooth color information indicates an abnormality in the tooth.

According to another aspect, a data-processing apparatus is provided comprising means for carrying out the method according to any of the above aspects.
According to an eleventh method aspect, a computer-implemented method of defining the color shade of a tooth using a camera of a mobile communication device is provided. The method comprises acquiring an image of teeth of a subject with a plain camera of the mobile communication device, receiving, via the user interface of the mobile communication device, a determination of an area in the acquired image that comprises one tooth, pre-processing the acquired image to produce an image of the one tooth for uploading, and associating a label with the image that indicates the lighting conditions in which the image was acquired. The method further comprises uploading the image of the tooth to a server for obtaining an indication of the lighting conditions of the received image on the basis of the associated label, for selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with known color, for applying K-means clustering to the received image to obtain color code embeddings of the uploaded image, for comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with the lowest distance to the uploaded image's color code embeddings, and for defining the color shade of the tooth shown in the uploaded image to be equal to the color shade of the model tooth shown in the training image having the lowest distance of color code embeddings to those of the uploaded image. The method also comprises receiving, by the mobile communication device, tooth color information indicative of the defined color shade of the tooth of the subject comprised in the uploaded image.
According to a twelfth method aspect, the area of the uploaded image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and color code embeddings are defined for each cell of the matrix.

According to a thirteenth method aspect, the training database is one of a private training database and a global training database.

According to a fourteenth method aspect, the method further comprises associating, before uploading, the uploaded image with a label indicating the lighting conditions in which the image was acquired.
According to a fifteenth method aspect, the method further comprises at least one of: providing the defined tooth color information to be used as a basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade, indicating a result of a comparison of the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining a change of the color shade of the tooth, and providing an indication that the defined tooth color information indicates an abnormality in the tooth.
According to another aspect, a mobile communication device is provided comprising means for carrying out the method according to any one of the eleventh to fifteenth method aspects.
According to a further aspect, a computer program product is provided having instructions which when executed cause a computing device or system to perform a method according to any one of the above method aspects.
According to a further aspect, a data-processing system is provided comprising means for carrying out the method according to any of the above method aspects.
The present invention is based on the idea of acquiring an image of teeth with a standard camera provided in a mobile communication device and using artificial intelligence, obtained with a combination of machine learning methods, for teaching and enabling the system to recognize the exact colour shade of a tooth of previously unknown colour shown in the image. The determined colour shade may then be utilized for manufacturing artificial teeth or a tooth crown. The present invention has the advantage that the dentist can use a simple, plain mobile communication device camera, for example a mobile phone or tablet computer camera, to acquire a digital image of the teeth of a patient, while the system enables accurately determining the color shade of the tooth on the basis of the acquired digital image with high reliability, without requiring exact calibration of the parameters that have an effect on the colors appearing in the acquired digital image. Thus, the success rate of selecting the correct color shade, matching that of the patient's actual tooth color shade, for the manufacture of an artificial tooth or teeth is high, and the risk of costly re-manufacturing of the artificial tooth or teeth due to erroneous color selection is reduced. By utilizing artificial intelligence with machine learning methodologies, the use of various cumbersome apparatuses, devices or methods known in the art, by which the digital image could be calibrated into exactly repeatable parameters, can be avoided. Further, the system facilitates protection of the identity of the patient and the user.
In the following the invention will be described in greater detail, in connection with preferred embodiments, with reference to the attached drawings, in which
Figure 1 is a schematic representation of a system,
Figures 2a to 2d illustrate pre-processing of the images,
Figure 3 illustrates an exemplary process of determining a tooth shade as seen by a user,
Figure 4 illustrates an exemplary process of determining a tooth shade as seen by the system,
Figure 5 illustrates a process of handling a training image, and
Figure 6 illustrates a process of defining tooth color shade.
Detailed description

Figure 1 illustrates a system according to the invention. This exemplary system serves three different users, for example dentists, each having their own mobile device (100a, 100b, 100c) equipped with a camera and with mobile data connectivity. Any number of users may use the actual system. Each mobile device (100a, 100b, 100c) is capable of connecting to a server (110) over a data connection. The server (110) carries the responsibility for the intelligence in the system. The server comprises or is associated with at least one image storage means (120, 125a, 125b, 125c, 130, 140).
The system utilizes at least three types of images, namely training images, calibration images and production images. The term 'training image' refers to an image that is used for training an artificial intelligence mapping function to map an image representing a tooth or teeth of a patient to a particular tooth shade. The term 'calibration image' refers to an image that is used for calibrating lighting conditions. The term 'production image' refers to an image representing a tooth or teeth of an actual patient, the color of which is to be determined. Preferably, each training image, calibration image and production image comprises an image of a single tooth that covers at least the majority of the visible area of that tooth.
Training images comprise one or more collections of images used for training the tooth color shade determining system. The term 'training database' refers to a collection of labeled training images and the training data associated with the respective training images. Training data may be associated with the training images as labels. Training databases may be logically divided into private training databases (125a, 125b, 125c) and global training databases (120).
In a private training database (125a, 125b, 125c), the labels associated with each training image preferably comprise an identifier of the user, one or more identifiers of the training environment and attributes of the tooth color shade. The identifier of the user may be, for example, a username. Identifiers of the training environment may comprise the time at which the image was taken, the name of the venue as given by the user, GPS coordinates, the light source, the model of the camera, the phone model and the calibration image name. The light source may, for example, define the name and/or model of the light source in the venue as given by the user. Attributes of the tooth color shade include the color shade to which the image is/was compared (for example A1) and the color code embeddings. If the image was taken from a standard tooth shade guide, the attributes of the tooth color shade include the known standard shade. If the training image is taken from a real tooth, there is a non-zero likelihood that there is an error in the training data.
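As an illustration of the labeling described above, a private-database training image record might carry metadata along the following lines. This is a sketch only; all field names and values are hypothetical, chosen to mirror the identifiers listed in this paragraph.

```python
# Hypothetical label record for one training image in a private training database.
training_image_labels = {
    "user_id": "U1",                      # identifier of the user (private DB only)
    "timestamp": "2019-10-28T09:15:00",   # time at which the image was taken
    "venue": "Central Office",            # venue name as given by the user
    "gps": (22.2588, 114.1911),           # GPS coordinates of the venue
    "light_source": "Ceiling LED panel",  # light source as given by the user
    "camera_model": "Phone X rear camera",
    "calibration_image": "Central Office / morning",
    "compared_shade": "A1",               # shade the image is/was compared with
    "color_code_embeddings": [...],       # embeddings computed by the server
}
```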
The global training database preferably comprises the same data as a private training database, except for the user identification data and the user-entered name of the venue.
Training images in a private training database may comprise identification of the user, but training images in a global training database (120) should not comprise any user identification for ensuring user privacy.
Images, including training images, calibration images and production images, are preferably labeled differently according to their intended use, origin and other information associated with the images, for example by the user, and the processing, storage and use of the images by the server (110) is based on the labeling. Labeling enables flexible use of images. For example, images from a private training database may be included in the global training database by relabeling. However, to maintain full control of the privacy of users as well as the quality and content of the global training database (120), such inclusion is preferably only performed by operators or supervisors of the system with the consent of the respective user. The term 'label' refers to any type of additional data associated with images. Labels may also be referred to as metadata.
A training database (120, 125a, 125b, 125c) may be used for color shade determination of production images acquired using the camera of any one of the mobile devices (100a, 100b, 100c). As an alternative to the global training database (120), each user can build their own private training database (125a, 125b, 125c), accessible only by the respective user using his/her mobile device (100a, 100b, 100c) or another internet-capable device that is capable of authenticating the user. The private or global training databases comprise data associated with each of a plurality of training images that are used to teach the system to correctly recognize the color shade of a tooth.
S Although various training databases (120, 125a, 125b, 125c) are illustrated in the figure 1 as separate elements, this is only intended to illustrate a logical separation of training databases (120, 125a, 125b, 125c). Depending on system setup, training databases (120, 125a, 125b, 125c) as well as storage means (130) for storing calibration images and storage means (140) for storing production images may reside in same or different physical storage devices as known in the art. Likewise, the server (110) may be a single physical server or a plurality of physical servers, or the server (110) may be implemented in a virtual server or in a cluster of virtual servers each providing service to one or more users (100a, 100b, 100c) as known in the art.
The training images stored in the private or global training databases are used for training an artificial intelligence mapping function to map an image representing a tooth or teeth of a patient to a particular tooth shade. For training purposes, a plurality of images (200) of teeth is acquired by the user with the normal, plain camera of the mobile device. Each acquired image is pre-processed at the user device before the training image is sent to the server. The pre-processing will be described in more detail later on. Preferably, images used for a particular private training database are always acquired using the same mobile device of the user, which device comprises a camera and a tooth color shade application for processing and labeling the images. In the following, we refer to the tooth color shade application running at the user device simply as 'the user application'. Likewise, the tooth color shade application running at the server may be referred to as 'the server application'. The user may choose to build several private training databases for different lighting conditions. This may be the case if the user works and takes pictures in different venues, or if the amount of ambient light in a particular venue varies greatly depending on, for example, the time of day or time of year.
E 25 Figures 2a to 2d illustrate examples of pre-processing of acquired images. The © same process is in principle applicable to all types of images used in this system, 3 in other words for training images, calibration images and production images. In
S this example, the acguired image shows mouth and teeth of a subject.
Alternatively, in particular when a calibration or a training image is acguired, the image may show a model tooth on arbitrary background. After acguiring the image with the camera of the mobile device, the user preferably indicates an area showing a single tooth (210) in the acquired image (200). Preferably, the user indicates the selection of the single tooth (210) by cropping the image (200) as illustrated with the grey shading in the figure 2b so that only the single, selected tooth (210) is shown in the cropped image (200°). Alternatively, the user may be enabled to select a part (200”) of the image as illustrated in the figures 2c and 2d, and the user application may automatically crop the image by removing image data outside the area selected by the user. In yet further alternative, information on the selected area may be included in the acquired image, which is uploaded in its entirety to the server, wherein only the selected area will be processed for tooth color determining. The term uploaded image refers to the cropped image (200") or the selected part of the image (2007) that is uploaded to the server for processing. Preferably only the part (200°, 200”) of the original image (200) that represents the selected tooth (210) is uploaded to the server for processing, since this saves amount of data to be uploaded and bandwidth needed for swift image data uploading. Preferably, majority of the area of the selected tooth (210) is included in the uploaded image. Since the shape of the cropped or selected area may not fully match with the shape of the tooth appearing in the original image, part of the peripheral area of the selected tooth (210) may be outside the uploaded image. However, this is acceptable, since it is more important that the uploaded image (200’, 200”) does not include any o foreign objects such as lips, gum or neighboring teeth, since these would add
O noise in the color determining algorithms. 3 © The user application is preferably operable for performing the cropping of the image or selecting the area in the image for automatic cropping. Alternatively,
E the user could use other image processing software in the user device for ; cropping the image and only then associate the cropped image with the user 2 application.
The uploaded image (200', 200'') showing the selected tooth (210) may be divided into a matrix (220) having a plurality of cells, and a color shade is defined for each of these cells. Preferably, the cells of the matrix (220) have approximately equal areas. Defining the matrix (220) is preferably performed at the server by the server application, but the matrix (220) may also be defined by the user application. Dividing the area of the uploaded image (200', 200'') into a plurality of cells enables taking into account the variation of the tooth color shade between different parts of the selected tooth (210). According to a first exemplary embodiment, a 3*3 matrix may be used, as illustrated in figure 2b, and according to another exemplary embodiment, a 3*4 matrix, as illustrated in figures 2c and 2d. Alternatively, a smaller or larger matrix (220) may be used, for example a 2*2, 2*3, 4*4, 4*5, 4*6, 5*5 or 5*6 matrix. Decreasing the size of the matrix reduces the accuracy of color shade determination in different parts of the tooth, but tests have indicated that sufficiently accurate color shade determination of different parts of the tooth can be achieved, for example, with a 3*3 matrix.
The cropped image or the selected area of the image is preferably rectangular, as is common in the art, but it can also be free-form, as illustrated in figure 2d, to facilitate inclusion of the majority of the area of the selected tooth (210) in the uploaded image (200', 200'') for tooth color shade determination, without including noise caused by unwanted objects, such as neighboring teeth, gum, tongue or lips, in the part of the acquired image to be analyzed. If a free-form selection is used, the cells of the matrix may have mutually different shapes and sizes, especially in the outer edge cells of the matrix.
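As an aside, the following is a minimal sketch (not part of the patent text) of how a rectangular uploaded image could be divided into such a matrix of approximately equal cells; the function name and the use of NumPy slicing are illustrative assumptions.

```python
import numpy as np

def divide_into_cells(image: np.ndarray, rows: int = 3, cols: int = 3):
    """Divide an H x W x 3 tooth image into a rows*cols matrix of cells of
    roughly equal area; edge cells absorb any remainder pixels."""
    h, w = image.shape[:2]
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    return [image[row_edges[r]:row_edges[r + 1], col_edges[c]:col_edges[c + 1]]
            for r in range(rows) for c in range(cols)]
```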
Figure 3 illustrates an exemplary high-level process of determining a tooth shade as seen by a user using the user application. A user interface towards the system is preferably provided by the user application running on a processor of a mobile device used by the user.

After installing the user application at the mobile device of the user and signing up as a user of the user application, the user first acquires a calibration image using the camera of the mobile device in step 301. At registration, a unique identification is generated for the user. In this example, the user is identified with the identification 'U1'. The calibration image is to be acquired in the normal working lighting conditions at the user's premises, for example at a dentist's reception. If the user works in more than one venue at different times, a separate calibration image is preferably acquired for each of the venues, since the lighting conditions likely vary significantly between them. The user application allows the user to tag each calibration image according to the venue it was taken in. Further, the geographical location of the calibration image may be associated with the calibration image and used as identification of the venue. The venues may be named in any manner. For example, a simple numbering or alphabetic naming may be used. Preferably, the user may name the venue freely, reflecting the actual name of the venue, such as "Discovery Bay Office", "Central Office" and so on.
This name is shown to the user in his/her user application. This way it becomes easier for the user to select the correct venue each time he/she uses the user application subsequently. The calibration image naming functionality of the user application may be utilized for naming calibration images taken at different times in the same venue. This way the user may acquire and store a separate calibration image for example for lighting conditions in morning light, afternoon light and evening lights from the window. In later examples, we will refer to different lighting conditions with references like ‘L1’,’L2’ for simplicity.
The calibration image thus acquired is used for an initial lighting calibration. The calibration image is preferably an image of a tooth sample with a predefined color, for example the A1 color shade as provided in a standard shade guide, such as the Vita Classic Shade Guide used since the 1960s, using the light source in the dentist's office. The calibration image is cropped, or the area of the sample tooth is selected, as explained above in relation to figures 2a to 2d. In step 301, the calibration image is uploaded to the server (110) and stored as a calibration image in the memory device (130) at or associated with the server. The stored calibration image may be labeled as a calibration image for this particular user (U1) together with the user-given name for the calibration image. The calibration image is a reference that is used for determining the approximate lighting conditions (L1). After processing the calibration image, the result, the color code embeddings of the calibration image, is compared to the color code embeddings of a global reference image. As both the calibration image and the global reference image represent a sample tooth of known color, preferably the A1 color, a difference e can be calculated between the color code embeddings of the calibration image and the embeddings of the global reference image.
The calculation of the difference e can be expressed with a mathematical representation. Let the lighting conditions in the calibration image be L1, and let the model tooth shown in the calibration image have shade A1. The color code embeddings of the calibration image are thus associated with the parameters (L1+A1). The color code embeddings of the corresponding global reference image are associated with the parameters Global(L+A1). The difference in the color code embeddings is then defined as e = Global(L+A1) - (L1+A1). This difference e is then stored and used for calibration in the training and testing phases, while generating and/or testing the training database, as well as in the production phase.
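The calibration arithmetic above can be sketched in a few lines. Representing the embeddings as NumPy vectors and the function names are assumptions made here for illustration; this is not the patent's own code.

```python
import numpy as np

def calibration_difference(global_ref: np.ndarray, calibration: np.ndarray) -> np.ndarray:
    """e = Global(L+A1) - (L1+A1): difference between the color code embeddings
    of the global reference image and those of the user's calibration image."""
    return global_ref - calibration

def adjust_embeddings(embeddings: np.ndarray, e: np.ndarray) -> np.ndarray:
    """Adjust the embeddings of a training or production image by deducting e,
    normalizing them toward the global lighting conditions."""
    return embeddings - e
```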
The calibration image is used to enable compensation of differences in lighting, thereby indirectly cancelling differences in factors affecting the colors seen in the image, including but not restricted to ambient light, camera clarity and indoor air quality problems. However, it should be noted that the color shade determining system is capable of adapting the color shade determination such that the lighting conditions in production images do not have to be exactly the same as in the calibration image.

The user is provided with two alternatives in step 302. In the first alternative, the user may choose to generate his own private training database by performing a training process in step 303. This training process will be explained in detail later. The term 'training database' refers to a processed training database that is operable to define tooth color shades.
The second alternative is that the user enables use of the global training database, which is provided by the system provider, ready to be used by any user.
Preferably, if the user selects use of the global training database, use of the private training database is no longer enabled, and the system will subsequently only use the global training database provided by the system provider, with the user device auto-calibrated using the above-explained calibration technique. If the user initially selects use of a private training database, he retains the option to change to use of the global training database, as illustrated with the arrow back to the selection step 302. Such re-selection may occur at any step after first selecting use of the private training database.
For protecting user privacy, the images recorded in the training databases include no patient identification information. When training images taken by a user are included in his/her private training database, the images include a label indicating the user identity (U1). Recorded images represent a tooth or a part of a tooth from which the user or the patient cannot be identified. No mixing of private and global training databases is allowed without the user's permission. Only the administrators of the company providing the system can access the global training database, using their administration permission login to the system.
In one embodiment, illustrated with step 313, inclusion of a private training database into the global training database may be enabled. However, such inclusion can only be performed by administrators of the system in response to acceptance of such inclusion by the respective user, and any user identification (U1) of the included private training database is removed in this process. The global training database should never include any identification of the users from whom the training data originates, in order to protect user privacy.

After the training database has been selected and, in case use of the private training database was selected, the required training has been performed, the system is ready to be used for determining tooth color shade. In step 304P, a production image, in other words an image acquired by the user using his mobile device and representing a patient's tooth of unknown color, is obtained and uploaded to the server as explained in connection with figures 2a to 2d, and the server application performs color shade determination on the basis of an intelligent comparison of the production image with the private training database.
Correspondingly, in step 304G, a production image, in other words an image acquired by the user using his mobile device and comprising teeth of unknown color, is obtained, processed at the user device and uploaded to the server, which performs color shade determination on the basis of an intelligent comparison of the production image with the global training database. The major process steps are thus the same, except that the training database is selected differently on the basis of the user's choice. As explained in connection with figures 2a to 2d, the acquired image may be cropped in the user device, or the appropriate area of the image that includes the selected tooth is marked on the image before uploading the image to the server for processing. The matrix dividing the selected tooth into a plurality of areas to be processed may be defined at the user device or at the server. The image to be uploaded may be appropriately labeled by the user device and/or by the server. If a user identity is associated with the uploaded image, it can be stored as a label associated with the image. Also, a label indicating lighting conditions may be associated with the uploaded image. If there is more than one calibration image stored for the particular user, the user preferably selects the lighting condition, among the venue and/or lighting condition names associated with and thus identifying his/her calibration images, that best matches the current lighting conditions, and the respective lighting condition label (L1) is associated with the uploaded training images and production images using the user application.
Likewise, steps 305P and 305G are mutually similar: in response to uploading the production image to the server, the user receives information that indicates the automatically defined tooth color shade.
The user may repeat steps 304G and 305G or steps 304P and 305P for a plurality of production images. The user may also choose, at any point, to obtain another calibration image. This may occur, for example, when he/she uses the user application at a new venue for the first time.
Figure 4 illustrates an exemplary high-level process of determining tooth shade as seen by the system.
In step 401, the system receives and analyzes a calibration image from a registered user (U1).

When the uploaded calibration image is analyzed at the server, a fingerprint of the user is created that comprises a unique identification of the user (U1), his unique calibration lighting condition (L1) and information associated with the color code embeddings of the predefined color (A1) of the standard A1 tooth as it appears in the calibration image. The data structure stored on the server thus comprises the information U1 + L1 + A1.
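A minimal sketch of such a server-side fingerprint record is given below; the class and field names are hypothetical, chosen only to mirror the U1 + L1 + A1 structure described here.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CalibrationFingerprint:
    """Server-side record created from an analyzed calibration image (U1 + L1 + A1)."""
    user_id: str                # unique user identification, e.g. "U1"
    lighting_label: str         # calibration lighting condition, e.g. "L1"
    shade_label: str            # predefined shade of the sample tooth, e.g. "A1"
    embeddings: np.ndarray      # color code embeddings of the calibration image
    difference_e: np.ndarray    # e = Global(L+A1) - (L1+A1), stored for later use
```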
Bitmap format is preferred for all image types, since compression of the image reduces the colors in the image, which would subsequently reduce the quality of the color shade determination. Image coding preferably utilizes one of the color spaces commonly used in computer graphics, such as RGB (Red-Green-Blue), CMYK (Cyan-Magenta-Yellow-Black) or YCbCr. The lighting condition is defined by analyzing the calibration image.
I of image processing. The calibration image or each cell of the calibration image o may be simplified into a limited size color code embeddings matrix using the K- 2 25 means clustering algorithm. For example, the calibration image or each cell of 2 the calibration image may be simplified into a 1x12 color code embeddings matrix. The determined lighting condition (L1) may be stored as a label associated with the image. For example, when a 1x12 matrix is used as lighting condition, the label comprises the 1x12. In an exemplary embodiment, the 1x12 matrix may be defined as follows: first 3 items of the 1x12 matrix may include the proportions of the 3 most often detected colors in the image. The remainder of the 1x12 matrix may then comprise data indicative of the respective colors.
For example, RGB coding may be used. Three items of the 1x12 matrix may be indicative of the RGB values of the first, most often detected color: the value of R of the first color, then the value of G of the first color, and then the value of B of the first color. Similarly, another 3 items of the 1x12 matrix may represent the RGB values of the second most often detected color. The remaining 3 values of the 1x12 matrix may represent the RGB values of the third most often detected color. In an alternative embodiment, the most often detected colors defined in the label may be arranged in any order; in other words, the colors do not have to be in order of their proportions of appearance.

Various labels associated with the images may have a predefined format or free text form, and they may include, but are not limited to, a label identifying at least one of a venue name and a time of day or time of year. The label may also be a combination of a free text defined by the user and one or more system-provided labels having a predefined form, which may or may not be shown to the user.
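A minimal sketch of building such a 1x12 lighting label with K-means is given below. The use of scikit-learn's KMeans and the ordering convention are assumptions made for illustration; the patent itself only specifies the 3 proportions followed by the three RGB triplets.

```python
import numpy as np
from sklearn.cluster import KMeans

def lighting_label_1x12(pixels: np.ndarray, k: int = 3) -> np.ndarray:
    """Build the 1x12 lighting-condition label: proportions of the 3 most
    frequent cluster colors followed by their RGB centroids.
    `pixels` is an (N, 3) array of RGB values from the calibration image."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]           # most frequent color first
    proportions = counts[order] / counts.sum() # first 3 items of the label
    centroids = km.cluster_centers_[order]     # (3, 3) RGB values, items 4-12
    return np.concatenate([proportions, centroids.ravel()])  # shape (12,)
```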
Further, the calibration image metadata stored as labels associated with the image may include information on the location where the calibration image was taken. This enables the user application to subsequently automatically select, or propose to the user to select, the appropriate calibration image for each subsequent training and/or production image on the basis of the location. Location data may be an actual geographical location defined, for example, on the basis of a GPS, GNSS, GLONASS or similar positioning system available in the user device, but location information may also be defined in any other way known in the art, for example on the basis of available local area network names.
If the user selects to use a private training database, the server next receives a plurality of training images in step 402 and processes them to generate a fully functional private training database. If the user selects to use the global training database, step 402 may be omitted. The private training database preferably comprises a processed training database based on a plurality of training images that are taken by the same user (U1) and that have the same lighting conditions (L1). When acquiring training and production images, the location may be suggested by the user application on the basis of the detected current location of the user and on the basis of the locations for which calibration images have been provided. Selection can be fully automatic, or the user may manually select and/or accept a suggestion made by the user application. If there are several mutually very close locations to select from, the application preferably gives a list of the most likely locations, in some order of preference, from which the user may select the correct one. Such listing and selection may be necessary if the user, for example, operates in the same building but in different rooms and/or on different floors, so that the geographical location is not significantly different. Details of the handling of training images will be described later in connection with figure 5. The global training database comprises a training database that is based on a plurality of training images acquired and uploaded by either the system provider or by anonymous users, and which comprises training data for a plurality of different lighting conditions.
After processing the calibration image, the system is ready to receive a production image in step 403, representing a tooth of unknown color.
The production image is then analyzed in step 404 to determine the color shade of the tooth. Color shade analysis is performed using a process that will be disclosed in more detail in connection with figure 6.

After analyzing the color shade of the tooth, the result is communicated and/or provided back to the user in step 405. Communication may be performed, for example, by showing in the user interface of the user application the determined color shade of the tooth or, when a matrix (220) is used, the determined color shade of each cell in the matrix. The application also enables the user to export the tooth color information as an export file in any applicable computer coding format known in the art. For example, a PDF document may be exported. Such an exported file comprising tooth color information can subsequently be used in communication with a dental laboratory that manufactures the artificial tooth. For example, the exported file may be attached to an email, or the exported file can be automatically transferred to other computer systems over any type of data-exchange-capable interface known in the art of computer networks and/or mobile devices. The exported and/or communicated tooth color information may further comprise the color code embeddings of the analyzed production image.
Figure 5 illustrates an exemplary process of handling a training image when training of a training database is performed. This process may be referred to as a light calibration method. The same process may be applied to training both the private training database and the global training database, although the source of the received training image may be different.
When the training, in other words the acquisition of a plurality of training images to create a training database, is performed by the user, the user (U1) can start training his own private database by acquiring a plurality of training images using his camera in his own lighting conditions (L1), corresponding to those of a previously acquired calibration image. The user (U1) preferably acquires images of teeth of different colors, for example model teeth with different known colors according to a standard tooth color shade system. Before or after acquiring a training image, and preferably before uploading the training image, the user labels the training image with the known color shade of the model tooth. For example, when a standard shade guide is used, the model teeth may have colors A1, A2, A3, A3.5, A4, B1, B2, B3, B4, C1, C2, C3, C4, D2, D3, D4, and the respective training images are labeled accordingly. Only single-color teeth or standard-colored model teeth should be used for training. The acquired training image and the respective tooth color information are uploaded and stored in the private training database associated with the server.
The global training database is trained in a similar manner, but preferably with a plurality of training images taken for each of a plurality of different lighting conditions and known tooth colors.
In step 501, the training image is received at the server. The received training image is labeled with information regarding the known standard color of the model tooth shown in the training image. If the training image is used for training a private training database, the training image will also be tagged with information regarding the user and his initial lighting conditions.
In step 502, the lighting information associated with this training image is obtained by the server. In this method, the lighting information preferably comprises the lighting label L1 associated with a calibration image and the difference e that was defined on the basis of the respective calibration image.
In step 503, an unsupervised learning algorithm is used to analyze the training image. The same unsupervised learning algorithm may be used for defining colors in all image types. For example, the K-means clustering algorithm known in the art may be used, which finds groups in the data. K refers to the number of groups. The algorithm works iteratively to assign each data point of the image, in other words each pixel of the image, to one of K groups based on the features that are provided. Data points are clustered based on feature similarity. When processing training images, the results of the K-means clustering algorithm are:

1. Centroids of the K clusters, which can be used to label new data

2. Labels for the training database (each data point is assigned to a single cluster)

In the current invention, unsupervised learning is applied to convert a large matrix, for example a bitmap acquired with the camera, into a smaller matrix, which in our case includes the predominant color codes in the acquired image.
In the step 504, as a result of the K-means clustering algorithm, color code embeddings of the training image are first defined. The defined training image color code embeddings are then adjusted by deducting the difference e, defined on the basis of the calibration image, from the defined color code embeddings. Thus, for an A1 color tooth training image a color code embedding EMBS=(L1+A1)-e is stored, for an A2 color tooth training image a color code embedding EMBS=(L1+A2)-e is stored, for an A3 color tooth training image a color code embedding EMBS=(L1+A3)-e is stored, and so on. When the K-means clustering algorithm is used, its input is preferably a cropped tooth image that only comprises a single tooth or the majority of a single tooth. For example, the uploaded image may have a size of 500x500 pixels. As indicated above, instead of the action known as cropping, the user may use the user interface to determine in any other way an area that represents the one tooth, and the system may crop the image automatically before it is uploaded. For accurate tooth color shade determination, lips, gums and other teeth must be cropped away or left outside the selected area that is to be processed by the K-means clustering algorithm. Further, the area shown in the received image is preferably handled as a matrix of a plurality of cells. The output of the K-means algorithm may be, for example, a 1x12 matrix called color code embeddings, which only contains K=12 different color codes. This color code embeddings matrix then represents the 12 most likely color code embeddings of the selected tooth. When the selected tooth is handled as a matrix of a plurality of cells, the output of the K-means algorithm is color code embeddings for each of the plurality of cells. These embeddings are then adjusted by deducting the difference defined on the basis of the corresponding calibration image.
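A minimal numpy sketch of this adjustment step, assuming `embeddings` holds the raw color code embeddings and `e` the calibration difference vector (both names and values are illustrative):

```python
import numpy as np

def adjust_embeddings(embeddings: np.ndarray, e: np.ndarray) -> np.ndarray:
    """Deduct the calibration difference e from raw color code embeddings.

    For an A1 training image this realizes EMBS = (L1 + A1) - e, so the stored
    embedding no longer carries the lighting offset captured by e.
    """
    return embeddings - e

# Illustrative values only: raw embedding (L1 + A1) and calibration difference e.
raw = np.array([182.0, 174.0, 160.0])
e = np.array([12.0, 9.0, 7.0])
stored = adjust_embeddings(raw, e)  # embedding saved in the training database
```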
After storing a plurality of adjusted color code embeddings for a plurality of training images representing sample teeth of known color in approximately the same lighting conditions L1, the training database is ready to be utilized by a supervised learning algorithm to determine the color shade of a tooth with unknown color in approximately the same lighting conditions.
In the step 505, a supervised learning algorithm is applied for determining the color shade of a tooth. This step can be used for testing the quality of the training data and for adding new training images to the training data, as well as for determining the color shade of a tooth in a production image. Testing refers to obtaining images of a tooth of known color shade without indicating this color to the application. Thus, the application will handle the image as if it were a normal production image, and the user may compare the result to the actual known color of the sample tooth.
Preferably, a Support Vector Machine (SVM) algorithm is used as the supervised learning algorithm. The objective of the SVM algorithm is to find a hyperplane in an N-dimensional space (N = the number of features) that distinctly classifies the data points, namely the color code embeddings received from the K-means clustering algorithm. The SVM is found to be particularly useful for identifying the matching color code embeddings with the lowest distance to the training image color code embeddings in order to predict the shade of the teeth. For example, if the user has trained the model with one image of an A1 color model tooth, the comparison during the testing stage is done to that one training image only. If the user has trained the model using a plurality of images of the A1 color model tooth, for example 50 images, the color code embeddings will be compared with the color code embeddings of all those A1 tooth color training images as well as with the color code embeddings of training images representing any other standard color model teeth, such as A2, A3 and so on.
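A hedged sketch of this supervised stage, assuming adjusted embeddings and shade labels ("A1", "A2", ...) are already available as arrays; scikit-learn's SVC with a linear kernel stands in for whatever SVM variant the system actually uses:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative training data: adjusted color code embeddings and shade labels.
X_train = np.array([[0.0, 0.1, 0.2], [5.1, 4.8, 5.3], [9.9, 10.2, 9.7]])
y_train = np.array(["A1", "A2", "A3"])

clf = SVC(kernel="linear")   # linear hyperplanes separating the embeddings
clf.fit(X_train, y_train)

# Predict the shade of an uploaded tooth image from its adjusted embedding.
unknown = np.array([[5.0, 5.0, 5.0]])
print(clf.predict(unknown))  # -> ['A2'] for this toy data
```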
If the acquired image was a training image, the training process utilizes the combination of the unsupervised training algorithm in the step 503 and the supervised training algorithm, and the analyzed new training image is included in the supervised training model at the step 505. If the acquired image is a production image, it is not included in the training model, but the obtained color shade information is communicated back to the user.
If the acquired image was a testing or production image, the uploaded image does not have any associated information on the expected color of the tooth shown in it. After initial color code embeddings have been defined for the uploaded image, the color code embeddings are adjusted by deducting the difference defined on the basis of the calibration image from the defined color code embeddings.
The supervised training algorithm is then applied to the adjusted color code embeddings of the uploaded image, comparing them to all color code embeddings in the respective training database. As a result, the supervised training algorithm decides which tooth color shade has the closest distance to that of the uploaded image and provides this determined color shade as the result to the user.
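Combining the illustrative pieces above, the production path for an unlabeled image might look as follows; `extract_embeddings`, `adjust_embeddings` and `clf` are the hypothetical helpers defined in the earlier sketches, not names from the patent:

```python
import numpy as np

def predict_shade(image_path: str, e: np.ndarray, clf) -> str:
    """Classify the tooth shade of a production image (illustrative pipeline only)."""
    raw = extract_embeddings(image_path)    # predominant colors via K-means
    adjusted = adjust_embeddings(raw, e)    # cancel the lighting offset e
    return clf.predict(adjusted.reshape(1, -1))[0]  # nearest trained shade label
```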
The acquired private training database may be merged with a global training database after a check has been performed on the private training database by administrators of the system provider. This avoids uploading erroneously tagged or otherwise erroneous images into the global training database. When merging a private training database with the global training database, the identity of the user is preferably removed from the training images. In the process of merging, the identity of the user is removed by not saving the identity information (U1) of the record, as it has become unnecessary and could potentially be used for user identification afterwards.
After acquiring a sufficient amount of training images, for example at least 10 training images, the user can start acquiring and uploading actual production images, each representing a tooth of a patient for color shade determination using his own private training database. Alternatively, the user can select use of a global training database.

The server application is configured to compare the calibration information of the user and the private training database with the global training database. If the server application detects on the basis of such a comparison that the global training database includes enough images taken in lighting conditions L1, the application may propose to the user that he could use the global training database instead of the private training database. A sufficient number of images may be, for example, at least 400 images, preferably at least 500 images.
Tooth color shade may also be an indication of an abnormality in the tooth.
Training images can also be taken from a real tooth with an abnormality, such as an abnormal color or shape of the tooth indicating, for example, that the tooth is dead, has caries or has some other tooth disease. Instead of selecting a standard model color, the user may label the image as representing a tooth with an abnormality. The label given by the user preferably names the type of the abnormality. Preferably, the uploaded image is in this case cropped to represent the area with the abnormality. The training image is then stored in the training database similarly to any other training image. Thus, the same training database may provide means for determining tooth color shade and/or an indication of a possible abnormality. Alternatively, different training databases may be defined for different purposes/uses.
In an alternative embodiment, the steps of figures 4 and 5 are performed in a similar manner, but light calibration and training are done in an alternative manner.
In the alternative embodiment, no global reference image is used for calibration.
Instead of calculating a difference e between the color code embeddings of the calibration image and the global reference image, the color code embeddings of the calibration image are used as such, and a difference is calculated between the color code embeddings of the calibration image and the training image. The calibration image stored in the database is associated with information (L1+A1) that corresponds to the color code embeddings of the calibration image. In the training phase, the color code embeddings for the obtained training images are adjusted in the step 504 by deducting the color code embeddings of the calibration image as such from the color code embeddings of the training image. Thus, a training image representing a model tooth with shade A1 in lighting conditions L1 will initially receive color code embeddings (L1+A1). Adjusted color code embeddings for a model tooth of color A1 are then obtained by calculating EMBS = (L1+A1)-(L1+A1), which ideally gives a value of 0. In practice, the value may, however, slightly deviate from 0. Adjusted color code embeddings for a model tooth of color A2 are obtained by calculating EMBS=(L1+A2)-(L1+A1)=(L1+A1+x)-(L1+A1)=x. The value "x" thus represents the distance of the color embeddings of an image of an A2 colored sample tooth from the calibration image. Likewise, adjusted color code embeddings for a model tooth of color A3 are obtained by calculating EMBS=(L1+A3)-(L1+A1)=(L1+A1+y)-(L1+A1)=y. The value "y" thus represents the distance of the color embeddings of an image of an A3 colored sample tooth from the calibration image. Similar calculations are performed for all training images representing different model tooth colors. This method may be referred to as "magnitude comparison"; in this variant of the method, the magnitude differences between the A1 shade and the other shades are stored and used for the analysis. The alternative embodiment is particularly useful when the global training database is used, since it does not restrict the selection of applicable training images to any specific lighting conditions. However, either of the embodiments may be used with both global and private training databases.
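A small worked sketch of this "magnitude comparison" variant, with purely illustrative numbers: the calibration embedding (L1+A1) is subtracted from each training embedding, so the lighting term L1 cancels out.

```python
import numpy as np

# Illustrative embeddings: calibration image of the A1 model tooth in lighting L1,
# plus training images of A2 and A3 model teeth taken in the same lighting.
calibration = np.array([180.0, 172.0, 158.0])   # (L1 + A1)
training = {
    "A1": np.array([180.0, 172.0, 158.0]),      # (L1 + A1) -> ideally 0
    "A2": np.array([176.0, 168.0, 152.0]),      # (L1 + A2) = (L1 + A1 + x)
    "A3": np.array([171.0, 163.0, 147.0]),      # (L1 + A3) = (L1 + A1 + y)
}

# Stored magnitude differences: EMBS = (L1 + shade) - (L1 + A1); L1 cancels.
magnitudes = {shade: emb - calibration for shade, emb in training.items()}
```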
Since the received and stored values are just relative differences in magnitude in comparison to the A1 model tooth in approximately similar lighting conditions L1, the lighting and color components are cancelled from the adjusted color code embeddings that are stored and used by the supervised learning algorithm. Thus, these stored color code embeddings representing magnitude differences can be used by any user, and the magnitude differences may thus be used for processing any production images of any user in any lighting conditions. Like any color code embeddings, a magnitude difference may be defined as a vector. The magnitude difference vector is preferably of the same form and size as the color code embeddings.

In the analysis of further testing and production images, color code embeddings of an image in any lighting conditions can be obtained according to the equation EMBS=(L1+Ax)-(L1+A1)=(L1+A1+x)-(L1+A1)=x. The supervised learning algorithm may then be applied in the phase 505 to determine the shade of the tooth shown in the uploaded image by finding the color code embedding in the training database that has the lowest distance to this obtained color code embedding "x".
Figure 6 illustrates a process of defining tooth color shade.
In the step 601, an image is received that shows at least one tooth or teeth of a subject. The color shade of the tooth is unknown.
In the step 602, the applicable calibration image in the selected training database, private or global, is selected on the basis of at least one of the user information and the lighting information.
In the step 603, K-means clustering similar to that explained in connection with phase 440 is applied to the received image for obtaining its color code embeddings. In the preferred embodiment, the color code embeddings are all associated with colors of a single tooth. However, the same principle may be used for detecting symptoms of a dental disease based on the color of the tongue or gums, or an extraordinary color of the tooth, which may be indicative of, for example, dental caries. Further, the same principle can be used for detecting a change of the color of the tooth or teeth over time by detecting the color from the same user on a regular basis. The color of the tooth of the user may change over time for various reasons. The reasons include, but are not limited to, recurring user actions to improve tooth whitening; alternatively, the user's diet may impact the color of the tooth. For such a complementary application, the respective part of the image should be selected that represents the respective tooth of interest, the tongue or part of the gums.
In the step 604, the obtained color code embeddings are adjusted to remove or reduce the effects of lighting in them. As explained above, depending on the method used, the adjustment may be performed either by deducting the difference e from the obtained color code embeddings or by deducting the color code embeddings of the calibration image from the obtained color code embeddings. The result of the step 604 is adjusted color code embeddings.
In the step 605, the adjusted color code embeddings are compared to trained color code embeddings among the training images in the applicable training database.
In the step 606, the color code embeddings of a training image are selected that have the lowest distance to the unknown image's color code embeddings. The tooth color associated with the selected training image with the lowest distance is deemed to represent the color of the tooth in the uploaded image. The color code embeddings are preferably selected in an iterative manner. In a series of iteration steps, the most likely color code embeddings out of a subset of possible color code embeddings are suggested as the selected color code embeddings.
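A hedged sketch of the lowest-distance selection in step 606, using plain Euclidean distance over the stored training embeddings (the metric is an assumption; the patent only speaks of "lowest distance"):

```python
import numpy as np

def closest_shade(adjusted: np.ndarray, database: dict[str, np.ndarray]) -> str:
    """Return the shade label whose stored embedding is nearest to `adjusted`."""
    return min(database, key=lambda shade: np.linalg.norm(database[shade] - adjusted))

# Illustrative use with magnitude-difference embeddings like the earlier sketch.
database = {"A1": np.zeros(3), "A2": np.array([-4.0, -4.0, -6.0])}
print(closest_shade(np.array([-3.5, -4.2, -5.8]), database))  # -> "A2"
```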
When a three-field color coding, such as RGB or YCbCr, is used for color coding, a 1x12 matrix may be used for the color code embeddings. The iteration may start with all possible colors, and the amount of possible colors is reduced until just 16 possibilities for color code embeddings are left. Out of these 16 possibilities, the three most likely, in other words the three most commonly appearing, color code embedding options are selected. Naturally, instead of the 16 possibilities used in the example, any integer number of color code possibilities may be used. The three most likely color code embeddings are included in the 1x12 color code embeddings matrix. Preferably, three fields of the matrix indicate the relative amounts of the three most common color code embeddings, and the remaining fields are reserved for indicating the color coding for these. For example, three fields of the matrix may be used to include the RGB or YCbCr color code values of each of the three most common colors. If a four-color model such as CMYK is used for color coding, the color code matrix may be, for example, of size 1x15 to allow using four fields in the matrix for each one of the three most common color code embeddings.
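The 1x12 layout described above can be pictured as three relative-amount fields followed by three RGB triplets; the following sketch builds such a matrix from illustrative data (the exact field ordering is an assumption, since the patent leaves it open):

```python
import numpy as np

# Illustrative three most common colors with their relative pixel shares.
shares = [0.52, 0.31, 0.17]                                   # relative amounts
colors = [(182, 174, 160), (176, 168, 152), (171, 163, 147)]  # RGB per color

# 1x12 embedding: [share1, share2, share3, R1, G1, B1, R2, G2, B2, R3, G3, B3]
embedding = np.array(shares + [c for rgb in colors for c in rgb], dtype=np.float64)
assert embedding.shape == (12,)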
In the step 607, tooth color information indicative of the determined tooth color shade is communicated to the user. The tooth color information may comprise the color code associated with the selected training image that corresponds to the most likely color shade of the tooth. The determined tooth color information may also comprise the determined color code embeddings.
In the optional step 608, at least part of the tooth color information is exported from the system, preferably in a digital format, to another data processing system or application of the user. By exporting the tooth color information, this information can be made available for other systems or applications for further analysis. When at least part of the tooth color information is exported, it may be further processed and/or analyzed by the user or by a system or application of the user for any purpose. For example, tooth color information may be exported to another application and/or to archives of the user for later use.
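As one hedged example of such an export, the tooth color information could be serialized as JSON for another application; all field names here are invented for illustration and are not specified by the patent.

```python
import json

# Hypothetical export payload for step 608; every field name is illustrative.
tooth_color_info = {
    "shade": "A2",   # determined standard tooth color shade
    "embedding": [0.52, 0.31, 0.17, 182, 174, 160, 176, 168, 152, 171, 163, 147],
    "lighting_label": "L1",
}
with open("tooth_color_export.json", "w") as f:
    json.dump(tooth_color_info, f, indent=2)
```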
According to an embodiment, at least part of the tooth color information may be exported into an application facilitating the manufacturing of an artificial tooth or teeth.
In another embodiment, at least part of the tooth color information may be stored into another user application to be subsequently used as the basis of a tooth color shade comparison. A tooth color shade comparison application may be used, for example, to detect changes in tooth color shade due to whitening or color changes due to diet. In yet another embodiment, the tooth color information may be determined to indicate that the tooth may, based on its color, have some abnormality. This is possible if the training database comprises training images of teeth with abnormality. An indication of the likelihood of an abnormality may also be provided to the user, so that he may, for example, examine the tooth in more detail.

When tooth color information is exported, the step 607 may be omitted, since the tooth color shade information can be made available to the user via the other application or system that receives the exported tooth color information.
The main intelligence of the system resides at the server, more precisely in the server application running on the server. The user application running in the mobile device needs a data connection to the server. The user application acts as a user interface, allowing the user to obtain and upload images, to tag tooth colors in the training images, and to receive tooth color information.
When the system is tested, every single trained color code embedding from the SVM model is compared to the color code embeddings that correspond to the test images captured for testing, and the color code embedding with the lowest distance is the resulting tooth color shade. Thus, the resulting tooth color shade may be expressed by referring to a particular tooth color as defined in the used standard tooth color shade system. Testing ensures that the system works as intended and that the color shades are detected accurately.
It is apparent to a person skilled in the art that as technology advances, the basic idea of the invention can be implemented in various ways. The invention and its embodiments are therefore not restricted to the above examples, but may vary within the scope of the claims.
Claims (15)
1. A computer-implemented method of defining color shade of a tooth (210) using a camera of a mobile communication device (100a, 100b, 100c), characterized by
- receiving (601) an image (200', 200'') of a tooth (210) of a subject, wherein the received image comprises a part of an image (200) acquired with a plain camera of the mobile communication device (100a, 100b, 100c);
- obtaining an indication of lighting conditions of the received image (200', 200'');
- selecting (602) applicable training images from a training database, wherein each training image comprises an image of a model tooth with known color;
- comparing (605) the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with color code embeddings having the lowest distance to the received image's color code embeddings; and
- defining (606) the color shade of the tooth (210) in the received image to be equal to the color shade of a model tooth shown in the training image having the lowest distance of color code embeddings to those of the received image;
characterized in that the method comprises:
- applying (603) K-means clustering to the received image (200', 200'') to obtain color code embeddings of the received image, wherein the area of the received image is divided into a matrix (220) with a plurality of cells, each cell representing one part of the tooth (210), and wherein color code embeddings are defined for each cell of the matrix (220), said color code embeddings being a matrix representing predominant color codes in the respective cells of the received image;
- before the step of comparing (605), adjusting (604) the color code embeddings of the received image (200', 200'') based on the indication of lighting conditions, wherein a) the adjusting of the color code embeddings comprises deducting from the obtained color embeddings of the received image (200', 200'') a difference between color code embeddings of a global reference image and color code embeddings of a calibration image acquired with the mobile communication device (100a, 100b, 100c) in lighting conditions that approximately correspond to the lighting conditions in the received image, or wherein b) the adjusting of the color code embeddings comprises calculating a magnitude difference between color code embeddings of the received image (200', 200'') and color code embeddings of a calibration image acquired with the mobile communication device (100a, 100b, 100c) in approximately similar lighting conditions, wherein said global reference image and said calibration image represent a tooth with known color; and
- after the step of defining (606) the color shade, communicating (607) tooth color information indicative of the defined tooth color shade back to the mobile communication device (100a, 100b, 100c) from which the image (200', 200'') was received.
2. The method according to claim 1, wherein the training database is one of a private training database (125a, b, c) and a global training database (120).
3. The method according to any of claims 1 to 2, wherein the received image is associated with a label indicating lighting conditions in which the received image (200', 200”) was acquired.
4. The method according to claim 1, wherein the applicable training images in option a) comprise a plurality of training images each associated with a label that indicates that the respective training image has been acquired in approximately similar lighting conditions with the received image and the calibration image and a label indicating actual color shade of a model tooth shown in the respective training image.
5. The method according to claim 1, wherein applicable training images in option b) comprise a plurality of training images associated with magnitude difference information, and the training image having the lowest distance is defined by comparing the magnitude difference of the received image and magnitude differences associated with the applicable training images.
6. The method according to any of the preceding claims, further comprising - exporting at least part of the tooth color information to another application or data processing system.
7. The method according to any of the preceding claims, further comprising at least one of:
- providing the defined tooth color information to be used as a basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade,
- comparing the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining a change of color shade of the tooth, and
- providing an indication that the defined tooth color information indicates an abnormality in the tooth.
8. A computer-implemented method of defining color shade of a tooth (210) using a camera of a mobile communication device (100a, 100b, 100c), the method comprising:
- acquiring an image (200) of teeth of a subject with a plain camera of the mobile communication device;
- receiving, via the user interface of the mobile communication device (100a, 100b, 100c), determination of an area in the acquired image that comprises one tooth (210);
- pre-processing the acquired image to produce an image (200', 200'') of the one tooth (210) for uploading;
- associating a label with the image that indicates lighting conditions in which the image was acquired;
- uploading the image of the tooth to a server for performing the method according to claim 1; and
- receiving (607) by the mobile communication device (100a, 100b, 100c) tooth color information indicative of the defined color shade of the tooth (210) of the subject comprised in the uploaded image (200', 200'').
9. The method according to claim 8, wherein the training database is one of a private training database (125a, b, c) and a global training database (120).
10. The method according to any of claims 8 to 9, wherein the method further comprises associating, before uploading, the uploaded image (200', 200'') with a label indicating lighting conditions in which the image was acquired.
11. The method according to any of claims 8 to 10, further comprising at least one of:
- providing the defined tooth color information to be used as a basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade,
- indicating a result of a comparison of the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining a change of color shade of the tooth, and
- providing an indication that the defined tooth color information indicates an abnormality in the tooth.
12. A computer program product having instructions which when executed cause a computing device or system to perform a method according to any one of claims 1 to 7 or any one of claims 8 to 11.
13. A data-processing apparatus comprising means for carrying out the method according to any one of claims 1 to 7.
14. A mobile communication device comprising means for carrying out the method according to any one of claims 8 to 11.
15. A data-processing system comprising a data-processing apparatus according to claim 13 and a mobile communication device according to claim 14.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN201921012676 | 2019-03-29 |
Publications (2)
Publication Number | Publication Date |
---|---|
FI20195916A1 FI20195916A1 (en) | 2020-09-30 |
FI130746B1 true FI130746B1 (en) | 2024-02-26 |
Family
ID=70189985
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
FI20195916A FI130746B1 (en) | 2019-03-29 | 2019-10-24 | Determining tooth color shade based on image obtained with a mobile device |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3948786A1 (en) |
FI (1) | FI130746B1 (en) |
WO (1) | WO2020201623A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11373059B2 (en) * | 2021-07-23 | 2022-06-28 | MIME, Inc. | Color image analysis for makeup color prediction model |
CN118160046A (en) * | 2021-10-28 | 2024-06-07 | 联合利华知识产权控股有限公司 | Method and apparatus for determining tooth color values |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6190170B1 (en) | 1998-05-05 | 2001-02-20 | Dentech, Llc | Automated tooth shade analysis and matching system |
US7064830B2 (en) * | 2003-06-12 | 2006-06-20 | Eastman Kodak Company | Dental color imaging system |
US8848991B2 (en) * | 2012-03-16 | 2014-09-30 | Soek Gam Tjioe | Dental shade matching device |
EP3105558A1 (en) * | 2013-12-05 | 2016-12-21 | Style Idea Factory Sociedad Limitada | Device for dental use for discriminating the color of teeth |
CN105662624A (en) | 2016-03-31 | 2016-06-15 | 姚科 | Method and device for realizing color comparison of false teeth |
WO2018080413A2 (en) | 2016-10-31 | 2018-05-03 | Cil Koray | Dental color determination system integrated to mobile phones and tablets |
2019
- 2019-10-24 FI FI20195916A patent/FI130746B1/en active
2020
- 2020-03-27 WO PCT/FI2020/050203 patent/WO2020201623A1/en unknown
- 2020-03-27 EP EP20717242.0A patent/EP3948786A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP3948786A1 (en) | 2022-02-09 |
WO2020201623A1 (en) | 2020-10-08 |
FI20195916A1 (en) | 2020-09-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10115191B2 (en) | Information processing apparatus, information processing system, information processing method, program, and recording medium | |
US7751606B2 (en) | Tooth locating within dental images | |
US8819015B2 (en) | Object identification apparatus and method for identifying object | |
CN110675368B (en) | Cell image semantic segmentation method integrating image segmentation and classification | |
Vazquez-Corral et al. | Color constancy by category correlation | |
US20150186755A1 (en) | Systems and Methods for Object Identification | |
FI130746B1 (en) | Determining tooth color shade based on image obtained with a mobile device | |
WO2013179581A1 (en) | Image measurement device, image measurement method and image measurement system | |
US20140267782A1 (en) | Apparatus And Method For Automated Self-Training Of White Balance By Electronic Cameras | |
CN112989990A (en) | Medical bill identification method, device, equipment and storage medium | |
CN111428552A (en) | Black eye recognition method and device, computer equipment and storage medium | |
Justiawan et al. | Comparative analysis of color matching system for teeth recognition using color moment | |
Montenegro et al. | A comparative study of color spaces in skin-based face segmentation | |
Lee et al. | A taxonomy of color constancy and invariance algorithm | |
CN111291778A (en) | Training method of depth classification model, exposure anomaly detection method and device | |
Barata et al. | 1 Toward a Robust Analysis of Dermoscopy Images Acquired under Different | |
CN104766068A (en) | Random walk tongue image extraction method based on multi-rule fusion | |
CN115375954B (en) | Chemical experiment solution identification method, device, equipment and readable storage medium | |
US20230162354A1 (en) | Artificial intelligence-based hyperspectrally resolved detection of anomalous cells | |
EP4372379A1 (en) | Pathology image analysis method and system | |
Kibria et al. | Smartphone-based point-of-care urinalysis assessment | |
US20220172453A1 (en) | Information processing system for determining inspection settings for object based on identification information thereof | |
US11514662B2 (en) | Detection of skin reflectance in biometric image capture | |
CN102077245B (en) | Face-detection processing methods, image processing devices, and articles of manufacture | |
P Mathai | SasyaSneha–An Approach for Plant Leaf Disease Detection |