WO2003030073A1 - Quality measure - Google Patents

Quality measure

Info

Publication number
WO2003030073A1
WO2003030073A1 · PCT/DK2002/000660
Authority
WO
WIPO (PCT)
Prior art keywords
measure
image
quality
fundus
fundus image
Prior art date
Application number
PCT/DK2002/000660
Other languages
French (fr)
Inventor
Niels Vaever Hartvig
Johan Doré HANSEN
Michael Grunkin
Jannik Godt
Per Rønsholt ANDRESEN
Ebbe Sørensen
Soffia Björk SMITH
Original Assignee
Retinalyze Danmark A/S
Priority date
Filing date
Publication date
Application filed by Retinalyze Danmark A/S filed Critical Retinalyze Danmark A/S
Publication of WO2003030073A1 publication Critical patent/WO2003030073A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016Operational features thereof
    • A61B3/0025Operational features thereof characterised by electronic signal processing, e.g. eye models

Definitions

  • the illumination quality of the image may be calculated by measuring the saturation of the image using at least one of the channels, preferably at least two of the channels, such as the red and the green channel of the image.
  • the quality is calculated using the saturation of the red and green channels, a low saturation value being a quality measure for improper illumination of the image.
  • the blue channel is preferably omitted from the saturation-value, however, as this channel in general contains very little information. In the following, saturation will always refer to red/green-saturation in the above sense.
  • the saturation image is mean-filtered with a kernel size of 51 units, wherein one unit is approximately 10 μm, and normalized by the mean saturation within the ROI. This normalization ensures that only areas that have low saturation values relative to the global image saturation are masked out. Improperly illuminated sections are sections of the image which touch the boundary of the ROI and have a normalized saturation value less than about 0.55. These regions may be identified by growing regions from the boundary pixels in the saturation image. The total area of these sections, relative to the size of the ROI, constitutes the global illumination quality of the image. For automatic screening the image should be excluded from analysis if a large part of the retina, more than 40% for instance, is badly illuminated. A sketch of this procedure follows below.
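A minimal sketch of this procedure, assuming the red/green saturation is computed as (max - min)/max over the red and green channels (the saturation formula itself is not spelled out here) and that boundary-touching regions are found via connected components; parameter names are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter, label, binary_erosion

def illumination_quality(red, green, roi, thresh=0.55, window=51):
    """Fraction of the ROI covered by badly illuminated regions touching the
    ROI boundary; more than about 0.4 suggests excluding the image."""
    mx = np.maximum(red, green).astype(float)
    mn = np.minimum(red, green).astype(float)
    sat = (mx - mn) / np.maximum(mx, 1e-9)   # assumed red/green saturation
    sat = uniform_filter(sat, size=window)   # mean-filter, kernel ~51 units
    sat = sat / sat[roi].mean()              # normalise by mean saturation in ROI
    dark = (sat < thresh) & roi              # low relative saturation values
    regions, _ = label(dark)                 # connected low-saturation regions
    border = roi & ~binary_erosion(roi)      # ROI boundary pixels
    ids = np.unique(regions[border])         # regions reachable from the boundary
    bad = np.isin(regions, ids[ids > 0])     # keep only boundary-touching regions
    return bad.sum() / roi.sum()             # global illumination quality
```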
  • This is preferably a local measure, which is used to identify sections with low signal-to-noise ratio.
  • the aim is to use this both locally, for removing these areas prior to detecting lesions, and globally as a measure of the overall SNR quality.
  • the "signal" (i.e. the signal and the noise) is defined as unsharp masking of the original image, with a kernel-size of for example 191 pixels, wherein one unit is approximately 10 μm. Thus, low-frequency components are removed from the image.
  • Let $\{S_i\}$ denote the resulting image.
  • the noise image $\{N_i\}$ is defined as the residual image after subtracting the 3 × 3 median-filtered signal from $\{S_i\}$.
  • the local signal-to-noise ratio is then defined as

    $$\mathrm{SNR}_i = \sqrt{\frac{\sum_{j \in V_i} (S_j - \bar S_i)^2}{\sum_{j \in V_i} N_j^2}},$$

    where $V_i$ is a region of size 51 × 51 around pixel $i$, and $\bar S_i$ is the average of $S$ over $V_i$.
  • the SNR-image is finally smoothed with a mean-filter of for example size 51 × 51 to obtain the final SNR-image.
  • where the SNR is low, the noise level will locally be too high for automated analysis of the image.
  • as a global measure, either the average SNR or the fraction of the retinal part of the image that has an SNR level lower than 1.3 may be used. The latter will often be most appropriate for robustness reasons. A sketch of the computation follows below.
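A minimal sketch of this computation; the SNR formula is the reconstruction given above (local spread of the signal over the local magnitude of the noise), and the window sizes follow the text:

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def snr_map(image, unsharp_window=191, local_window=51):
    """Local signal-to-noise map as sketched above."""
    img = np.asarray(image, dtype=float)
    S = img - uniform_filter(img, size=unsharp_window)  # "signal": unsharp mask
    N = S - median_filter(S, size=3)                    # noise: 3x3 median residual
    mS = uniform_filter(S, size=local_window)           # local mean of S
    var_S = uniform_filter(S * S, size=local_window) - mS * mS
    msq_N = uniform_filter(N * N, size=local_window)    # local mean square of N
    snr = np.sqrt(np.maximum(var_S, 0.0) / np.maximum(msq_N, 1e-12))
    return uniform_filter(snr, size=local_window)       # final smoothing

# Global summary, e.g. the fraction of the retinal ROI with SNR below 1.3:
# bad_fraction = (snr_map(green_channel)[roi] < 1.3).mean()
```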
  • the quality measure may include specific measures for identifying artefacts, such as templates searching for drops of liquid or secretions that may be positioned on the camera lens. Text lines inadvertently placed over the image by the electronic system, which give rise to high variations, may also be detected by templates or by searching for extrema in single colour-channel image functions. Further artefacts include spots due to camera artefacts, pixel errors due to camera or CCD errors, and reflections that may arise from a healthy individual's vitreous body and lead to bright areas around the fovea and extensions therefrom.
  • an artefact according to this invention is any presentation on the image that is not part of the scene of the image.
  • artefacts may for example be one or more of the following: undesired objects projected onto the image, dispersion, diffraction and/or reflection of the optic system.
  • Examples of undesired objects projected onto the image are an eyelash, the edge of the iris, text in the image, or digital errors.
  • Pathologies whose indicators may be quantified or qualified by the present invention are generally pathologies relating to the parts of the eye lying in front of the fundus itself. However, some pathologies may also be present in the fundus region to such an extent that they influence the overall quality.
  • pathology influencing the quality in a systematic manner may be quantified or qualified by calculating a quality measure and correlating it with standards.
  • the pathology indicators may be indicators for one or more of the following pathologies: glaucoma, diabetic retinopathy, amotio retina, hemorrhages of the retina, cataract, scars, photo-coagulation scars, laser scars, pathological vessel growth.
  • the quality measures comprise calculating of at least a contrast measure and an interlacing measure.
  • Calculation of a quality measure and estimation of a gradability measure may be used for identifying regions in the image that should be masked before detecting the presence or absence of a structure and/or a pathological condition in parts of a fundus image, said method comprising
  • the method is preferably combined with methods for detecting the specific structures, such as vessels, the optic nerve head and the fovea, and detecting lesions, such as microaneurysms and exudates, which show up on fundus images as generally "dot shaped" (i.e. substantially circular) areas. It is of interest to distinguish between such microaneurysms and exudates, and further to distinguish them from other pathologies in the image, such as "cotton wool spots" and hemorrhages.
  • the quality measure may be used in a method for registering at least two fundus images from the same eye, comprising
  • the present invention relates to a method for selecting fundus images for automatic screening, comprising a) acquiring a fundus image,
  • images with poor quality should be returned to the user with a message that the image could not be processed.
  • Many images, which have poor gradient contrast, are also judged to be ungradeable by the human grader, and by returning the images immediately, the algorithms will treat data much as a human grader would. It is within the scope of the invention that the quality measure is conducted immediately after the recording of the image, to let the photographer acquire a new image if the first one was unacceptable.
  • the invention also relates to a method for detecting the presence or absence of a structure and/or a pathological condition in parts of a fundus image taking into account the quality measure and gradability measure, said method comprising
  • the invention further relates to a system comprising the algorithms capable of per- forming the methods according to the invention.
  • the system according to the invention may be any system capable of conducting the method as described above as well as any combinations thereof within the scope of the invention.
  • the system may include algorithms to perform any of the methods de- scribed above.
  • a graphical user interface module may operate in conjunction with a display screen of a display monitor.
  • the graphical user interface may be implemented as part of the processing system to receive input data and commands from a conventional keyboard and mouse through an interface and display results on a display monitor.
  • many components of a conventional computer system have not been discussed, such as address buffers, memory buffers, and other standard control circuits, because these elements are well known in the art and a detailed description thereof is not necessary for understanding the present invention.
  • Pre-acquired image data can be fed directly into the processing system through a network interface and stored locally on a mass storage device and/or in a memory. Furthermore, image data may also be supplied over a network, through a portable mass storage medium such as a removable hard disk, optical disks, tape drives, or any other type of data transfer and/or storage devices which are known in the art.
  • a parallel computer platform having multiple processors is also a suitable hardware platform for use with a system according to the present invention.
  • Such a configuration may include, but not be limited to, par- allel machines and workstations with multiple processors.
  • the processing system can be a single computer, or several computers can be connected through a communications network to create a logical processing system.
  • the present system allows the grader, that is, the person normally grading the images, to identify the structures and lesions more rapidly and securely.
  • the system described in the following is a more reliable system, wherein it is also possible to arrange for acquisition of the images at one location and for examination of them at another location.
  • the images may be recorded by any optician or physician or elsewhere and be transported to the examining specialist, either as photos or the like or on digital media. Accordingly, by use of the present system, decentralised centres for recording the images could be realised while maintaining fewer expert graders.
  • the network may carry data signals including control or image adjustment signals by which the expert examining the images at the examining unit directly controls the image acquisition occurring at the recordation localisation, i.e. the acquisition unit.
  • control signals, such as command signals for zoom magnification, steering adjustments, and wavelength of field illumination, may be selectively varied remotely to achieve the desired imaging effect.
  • questionable tissue structures requiring greater magnification or a different perspective for their elucidation may be quickly resolved without ambiguity by varying such control parameters.
  • by switching illumination wavelengths views may be selectively taken to represent different layers of tissue, or to accentuate imaging of the vasculature and blood flow characteristics.
  • control signals may include time varying signals to initiate stimulation with certain wavelengths of light, to initiate imaging at certain times after stimulation or delivery of dye or drugs, or other such precisely controlled imaging protocols.
  • the digital data signals for these operations may be interfaced to the ophthalmic equipment in a relatively straightforward fashion, provided such equipment already has initiating switches or internal digital circuitry for controlling the particular parameters involved, or is capable of readily adapting electric controls to such control parameters as system focus, illumination and the like.
  • the imaging and ophthalmic treatment instrumentation in this case will generally include a steering and stabilization system which maintains both instruments in alignment and stabilized on the structures appearing in the field of view.
  • the invention contemplates that the system control further includes image identification and correlation software which allows the ophthalmologist at site to identify particular positions in the retinal field of view, such as pinpointing particular vessels or tissue structures, and the image acquisition computer includes image recognition software which enables it to identify patterns in the video frames and correlate the identified position with each image frame as it is acquired at the acquisition site.
  • the image recognition software may lock onto a pattern of retinal vessels.
  • the invention further contemplates that the images provided by acquisition unit are processed for photogrammetric analysis of tissue features and optionally blood flow characteristics. This may be accomplished as follows. An image acquired at the recordation unit is sent to an examination unit, where it is displayed on the screen. As indicated schematically in the figure, such image may include a network of blood vessels having various diameters and lengths. These vessels include both arterial and venous capillaries constituting the blood supply and return network.
  • the workstation may be equipped with a photogrammetric measurement program which, for example, may enable the technician to place a cursor on an imaged vessel and, by moving the cursor along the vessel while clicking, have the software automatically determine the width of the vessel and the subvessels to which it is connected, as well as the coordinates thereof.
  • the software for noting coordinates from the pixel positions and linking displayed features in a record, as well as submodules which determine vessel capacities and the like, are straightforward and readily built up from photogrammetric program techniques.
  • Work station protocols may also be implemented to automatically map the vasculature as described above, or to compare two images taken at historically different times and identify or annotate the changes which have occurred, highlighting for the operator features such as vessel erosion, tissue which has changed colour, or other differences.
  • a user graphical interface allows the specialist to type in diagnostic indications linked to the image, or to a particular feature appearing at a location in the image, so that the image or processed version of it becomes more useful.
  • the relative health of the vessel, its blood carrying capacity and the like may also be visually observed and noted.
  • This photogrammetric analysis allows a road map of the vasculature and its capacity to be compiled, together with annotations as to the extent of tissue health or disease apparent upon such inspection.
  • a very precise and well-annotated medical record may be readily compiled and may be compared to a previously taken view for detailed evidence of changes over a period of time, or may be compared, for example, to immediately preceding angiographic views in order to assess the actual degree of blood flow occurring therein.
  • the measurement entries at examination unit become an annotated image record and are stored in the central library as part of the patient's record.
  • the present invention changes the dynamics of patient access to care, and the efficiency of delivery of ophthalmic expertise in a manner that solves an enormous current health care dilemma, namely, the obstacle to proper universal screening for diabetic retinopathy.

Abstract

The present invention relates to a method for determining the quality of a fundus image, the use of said quality for determining pathologies and/or artefacts in the image, and methods for handling said image, as well as a system comprising algorithms performing the methods. The quality measure may be global for the whole image or a local measure. The quality measure may be selected from a contrast measure, a sharpness measure, an interlacing measure, a signal-to-noise ratio, a colour composition measure and an illumination measure.

Description

Quality measure
The present invention relates to a method for determining the quality of a fundus image, the use of said quality for determining pathologies and/or artefacts in the image, and methods for handling said image, as well as a system comprising algorithms performing the methods.
Background
Fundus image analysis presents several challenges, such as high image variability, the need for reliable processing in the face of nonideal imaging conditions and short computation deadlines. Large variability is observed between different patients - even if healthy, with the situation worsening when pathologies exist. For the same patient, variability is observed under differing imaging conditions and during the course of a treatment or simply a long period of time. Besides, fundus images are often characterized by having a limited quality, being subject to improper illumination, glare, fadeout, loss of focus and artifacts arising from reflection, refraction, and dispersion.
Diabetes is the leading cause of blindness in working age adults. It is a disease that, among its many symptoms, includes a progressive impairment of the peripheral vascular system. These changes in the vasculature of the retina cause progressive vision impairment and eventually complete loss of sight. The tragedy of diabetic retinopathy is that in the vast majority of cases, blindness is preventable by early diagnosis and treatment, but screening programs that could provide early detection are not widespread.
Promising techniques for early detection of diabetic retinopathy presently exist. Researchers have found that retinopathy is preceded by visibly detectable changes in blood flow through the retina. Diagnostic techniques now exist that grade and classify diabetic retinopathy, and together with a series of retinal images taken at different times, these provide a methodology for the early detection of degeneration. Various medical, surgical and dietary interventions may then prevent the disease from progressing to blindness. Despite the existing techniques for preventing diabetic blindness, only a small fraction of the afflicted population receives timely and proper care, and significant barriers separate most patients from state-of-the-art diabetes eye care. There are a limited number of ophthalmologists trained to evaluate retinopathy, and most are located in population centers. Many patients cannot afford the costs or the time for travel to a specialist. Additionally, cultural and language barriers often prevent elderly, rural and ethnic minority patients from seeking proper care. Moreover, because diabetes is a persistent disease and diabetic retinopathy is a degenerative disease, an afflicted patient requires lifelong disease management, including periodic examinations to monitor and record the condition of the retina, and sustained attention on the part of the patient to medical or behavioral guidelines. Such a sustained level of personal responsibility requires a high degree of motivation, and lifelong disease management can be a significant lifestyle burden. These factors increase the likelihood that the patient will, at least at some point, fail to receive proper disease management, often with catastrophic consequences.
Accordingly, it would be desirable to implement more widespread screening for retinal degeneration or pathology, and to positively address the financial, social and cultural barriers to implementation of such screening. It would also be desirable to improve the efficiency and quality of retinal evaluation.
Hence, a precise knowledge of both localisation and orientations of the structures of the fundus is important, including the localisation of the vessels. Currently, examination of fundus images is carried out principally by a clinician examining each image "manually". This is not only very time-consuming, since even an experienced clinician can take several minutes to assess a single image, but is also prone to error since there can be inconsistencies between the way in which different clinicians assess a given image. It is therefore desirable to provide ways of automating the process of the analysis of fundus images, using computerised image analysis, so as to provide at least preliminary screening information and also as an aid to diagnosis to assist the clinician in the analysis of difficult cases.
Next, it is generally desirable to provide a method of determining accurately, using computerised image analysis techniques, the position of both the papilla (the point of exit of the optic nerve) and the fovea (the region at the centre of the retina, where the retina is most sensitive to light), as well as the vessels of the fundus.
Summary of the invention
Image quality is an important parameter in automated fundus image analysis systems, as algorithms for estimating vessel geometry, for detecting the optic nerve head and for lesion detection are often developed and validated using images of a reasonable quality. When images of poor quality are passed to the system, the algorithms may return erroneous results, which may have critical consequences, especially in an automatic screening scenario.
The present inventors have been able to calculate a quality measure that, apart from being used in selecting images for automatic screening, also may provide an indicator for various pathologies and artefacts of the image, since it has been found that there is a systematic correlation between the image quality and some pathologies in or about the eye.
Accordingly, the present invention relates to an automatic method for quantifying and/or qualifying pathology indicators and/or artefacts of a fundus image or of a part of a fundus image, comprising
a) calculating at least one acquisition quality measure of the image,
b) comparing the quality measure calculated with standard quality measures for the pathology indicators and artefacts to be quantified and/or qualified, and
c) quantifying and/or qualifying pathology indicators and/or artefacts of the fundus image.
Furthermore, the present invention relates to a quality measure as such, the use of said quality measure for example being in selecting images for automatic screening procedures. Accordingly, the present invention further relates to a method for quantifying the quality of a fundus image, comprising
a) calculating at least one quality measure, wherein said quality measure is selected from a contrast measure, a sharpness measure, an interlacing measure, a signal-to-noise ratio, a colour composition measure and an illumination measure, and
b) quantifying the quality of the fundus image based on the quality measure calculated.
The quality measure may be used as a tool in a method for detecting structure and pathologies in a fundus image, such as a method for automatically detecting the presence or absence of a structure and/or a pathological condition in parts of a fundus image, comprising
a) calculating a quality measure of said fundus image parts,
b) estimating the gradability measure of said fundus image,
c) masking parts of said fundus image having a quality measure and/or a gradability measure below a predetermined threshold,
d) detecting the presence or absence of a structure and/or pathological condition in said fundus image.
The gradability measure relates to an overall measure of events in the acquisition route of the image as well as events in the image as such, wherein events leading to a low gradability measure may be hemorrhages in the retina and/or large scars, wherein the events are of a size that interferes with the detection of the structures and other pathologies.
In order to examine the fundus region properly, the images must be registered or mounted in a continuous manner with respect to the structures in the image, for example by arranging the images so that the vessels continue correctly across the images. Also, the quality measure may be used as a criterion when automatically registering the images, in that poor quality images should be rejected and replaced by better quality images if possible.
A method for registering at least two fundus images from the same eye, comprising
a) calculating an acquisition quality measure of said fundus image,
b) selecting images having an acquisition quality measure above a predetermined threshold, and
c) registering the fundus images.
Furthermore, an important aspect of the present invention is a method for selecting fundus images for automatic screening because fundus images of a poor quality may lead to false positive or false negative detections in the image, due to disturbances of the algorithms because of the representation of the poor quality on the images.
Thus, the present invention further relates to a method for selecting fundus images for automatic screening, comprising
a) acquiring a fundus image,
b) calculating an acquisition quality measure of said fundus image,
c) optionally estimating a gradability measure of said fundus image,
d) classifying the fundus image quality measure with respect to a predetermined threshold, and
e) selecting for automatic screening a fundus image having the quality measure and optionally the gradability measure above the predetermined threshold.
The invention further relates to a system comprising the algorithms capable of performing the methods according to the invention.
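As a minimal sketch, the selection step e) can be expressed as a simple gate over the computed measures; the thresholds are the ones quoted later in this description (a robust CV below about 0.6 or a CV above about 1.3 indicates a problem image, and IL ratios above about 20 show visible interlacing), and the function name is illustrative:

```python
def select_for_screening(cv_rob: float, cv: float, il: float) -> bool:
    """Gate a fundus image for automatic screening based on its quality
    measures; an image failing the gate is returned for manual grading."""
    return cv_rob >= 0.6 and cv <= 1.3 and il <= 20.0
```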
Drawings
Figures 1-4: Four fundus images and their gradient contrast measures. The CV measure is the coefficient of variation of the gradient magnitudes in the image, and is a measure of the overall quality of the image. The robust CV measure is the CV of the gradients where outliers are removed.
Figure 1: Blurred image. CV: 0.551071, CVrob: 0.539476
Figure 2: Good quality. CV: 1.050181, CVrob: 1.049217
Figure 3: Poor illumination. CV: 0.713806, CVrob: 0.576130
Figure 4: Large scars. CV: 1.437520, CVrob: 1.423786
Figure 5: The robust CV of gradient magnitudes as a function of the gradient estimation method for the four fundus images displayed in Figures 1-4.
Definitions
Fovea: The term is used in its normal anatomical meaning, i.e. the spot in the retina having a great concentration of cones, giving rise to vision. Fovea and the term "macula lutea" are used as synonyms.
Image: The term image is used to describe a representation of the region to be examined, i.e. the term image includes 1-dimensional representations, 2-dimensional representations, 3-dimensional representations as well as n-dimensional representations. Thus, the term image includes a volume of the region, a matrix of the region as well as an array of information of the region.
Optic nerve head: The term is used in its normal anatomical meaning, i.e. the area in the fundus of the eye where the optic nerve enters the retina. Synonyms for the area are for example, the "blind" spot, the papilla, or the optic disc.
Red-green-blue image: The term relates to the image having the red channel, the green channel and the blue channel, also called the RGB image.
ROI: Region of interest.
Visibility: The term visibility is used in the normal meaning of the word, i.e. how visible a lesion or a structure of the fundus region is compared to background and other structures/lesions.
Detailed description of the invention
Images
The images of the present invention may be any sort of images and presentations of the region of interest. The fundus image is a conventional tool for examining the retina and may be recorded on any suitable means. In one embodiment the image is presented on a medium selected from slides, paper photos or digital photos. However, the image may be any other kind of representation, such as a presentation on an array of photo receptor elements, for example a CCD.
The image may be a grey-toned image or a colour image; in a preferred embodiment the image is a colour image.
Quality measure
The quality measure according to the present invention is an acquisition quality measure, i.e. a measure of the quality of the image related to the optical and electronic parts of the acquisition. Thus, the acquisition quality relates to the optical path, that is, the route from in front of the fundus, such as from the vitreous body, to the optical means of the acquisition apparatus, be it a camera, a CCD or the like. Thus, the optic system may comprise any part of the optical path from the vitreous body, lens and cornea to the camera or recorder.
The electronic path relates to the route of the image from the camera, CCD or the like into the computer capable of automatically measuring the quality.
The quality measure is often a global measure, in the sense that the quality of the image as a whole is given. It is however also possible to detect the image quality locally, for example for parts of the image. In particular for local quality events, such as locally presented artefacts, this may be an advantage, since it may lead to rejection of, for example, only a part of the image during automatic screening and detection on the rest of the image. For example, a local quality measure may be assigned to more than one part of the image, and the image may then have different quality measures for different parts of the image.
Also, the quality measure may be calculated for parts of the image and give rise to a global measure, such as wherein at least one quality measure is calculated locally for more than one part of the fundus image, and then optionally summed up to a global measure.
The acquisition quality measure may be calculated by any suitable measures, whereamong the following measures are preferred for the invention: the quality measure is selected from a contrast measure, a sharpness measure, an interlacing measure, a signal-to-noise ratio, a colour composition measure and an illumination measure. However, the person skilled in the art may select other measures descriptive of the quality; important measures are measures capable of detecting too little or too high variation in the image, wherein little variation is often due to unsharp images, and high variation is often due to events in the image relating to artefacts.
A preferred quality measure is a contrast measure, wherein the contrast measure may be the variation in the gradient magnitude image, preferably the coefficient-of-variation of the gradient magnitude. The contrast measure is preferred since it is sensitive to the variation of the image. The contrast measure is preferably calculated robustly, preferably by iteratively discarding outliers. Outliers may be observations deviating more than a number of standard deviations from the mean on a log-scale, wherein said number preferably is in the range of 1 to 10.
Gradient contrast measure
In a preferred embodiment the contrast measure is a gradient contrast measure. This is a measure of the overall quality of the image, loosely speaking in terms of visibility of details in the image. Poor gradient contrast may be related either to a technical problem, such as wrong illumination of the retina, or to pathology, such as a cataract. The gradient contrast measure correlates well with the human grader's interpretation of the quality of a fundus image, as will be illustrated later.
Let $\Lambda = \{\lambda_{rc}\}$, $r = 1, \dots, R$, $c = 1, \dots, C$, denote the magnitudes of the gradients in the image. The gradient contrast measure is then given by the coefficient of variation (CV) of these,

$$\mathrm{CV} = \frac{s_\lambda}{\bar\lambda}, \qquad \bar\lambda = \frac{1}{RC}\sum_{r=1}^{R}\sum_{c=1}^{C}\lambda_{rc}, \qquad s_\lambda^2 = \frac{1}{RC-1}\sum_{r=1}^{R}\sum_{c=1}^{C}\left(\lambda_{rc}-\bar\lambda\right)^2.$$
Here R and C are the number of rows and columns in the image. The heuristic idea is that visually, as well as in the automatic lesion detection algorithm, visibility of local features in the image is related to gradient magnitude. Large variation in the gradient magnitude, and hence a large CV, indicates that there is a large difference between sections with small gradients ("background") and sections with large gradients (sections with features such as vessels or the optic nerve head).
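A minimal sketch of this measure, assuming Gaussian-derivative gradient estimation (one of the two estimators discussed later in this description); the function name and the default sigma are illustrative, not from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_cv(image, sigma=1.5):
    """Coefficient of variation (CV) of the gradient magnitudes of a 2-D image."""
    img = np.asarray(image, dtype=float)
    g_r = gaussian_filter(img, sigma, order=(1, 0))  # derivative along rows
    g_c = gaussian_filter(img, sigma, order=(0, 1))  # derivative along columns
    lam = np.hypot(g_r, g_c)                         # gradient magnitudes lambda_rc
    return lam.std(ddof=1) / lam.mean()              # CV = s_lambda / mean
```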
The CV may be calculated based on the original image, $I$, or on a function of the image $I$, such as a filtered image. Thus, from the original image, $I$, a gradient image $\Lambda = \{\lambda_{rc}\}$ may be calculated and used for calculating a robust CV. In another embodiment, a filtered image $\bar I$ may be used, said filtered image being produced by

$$\bar I(r,c) = \frac{1}{|W(r,c)|}\sum_{(r',c') \in W(r,c)} I(r',c'),$$

where $W(r,c)$ is a window around pixel $(r,c)$. An unsharp masked image may then be produced as

$$I_{\mathrm{unsharp}} = I - \bar I.$$
The unsharp image does not contain general background variation. The gradient contrast measure may then be calculated from the unsharp image.
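A sketch of the mean-filtering and unsharp masking steps under the same assumptions; the window size is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_mask(image, window=51):
    """I_unsharp = I - mean-filtered I; removes general background variation."""
    img = np.asarray(image, dtype=float)
    return img - uniform_filter(img, size=window)  # I minus I_bar over W(r,c)
```

The gradient contrast measure may then be computed as gradient_cv(unsharp_mask(image)).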
Often a robust CV is more suitable, particularly when the image contains small reflections, which should not contribute to the contrast measure. A robust CV, where outliers are iteratively removed, is preferably used. In one embodiment the outliers are defined as observations deviating more than 4 standard deviations from the mean on a log-scale. This criterion is implemented on the original scale by assuming a lognormal distribution of the gradient magnitudes; thus the magnitudes are not actually log-transformed in the algorithm. An iteration in the outlier-removal procedure is defined as follows: let $n$ denote the number of current observations not regarded as outliers, and let $S$ and $SS$ denote respectively the sum and the squared sum of these. The minimal current observation $\lambda_{\min}$ is considered an outlier if

$$\lambda_{\min} < \exp(\hat\mu - \sigma\,\hat\sigma),$$

where $\hat\mu$ and $\hat\sigma$ are the mean and standard deviation of the log-magnitudes implied by $n$, $S$ and $SS$ under the lognormal assumption, and $\sigma$ is the outlier-tolerance, preferably set to 4.0. If $\lambda_{\min}$ is an outlier, $n$, $S$ and $SS$ are updated and the next minimal observation is considered. If not, the maximal current observation $\lambda_{\max}$ is considered, and is declared an outlier if

$$\lambda_{\max} > \exp(\hat\mu + \sigma\,\hat\sigma).$$

If this is the case, $n$, $S$ and $SS$ are updated, and the procedure starts over; if not, the iterative procedure ends. We let $\mathrm{CV}_{\mathrm{rob}}$ denote the robust CV-measure. The $\mathrm{CV}_{\mathrm{rob}}$ measure may be used to identify images of low gradient contrast. The original CV measure may, however, provide additional information in order to identify images with unusually large contrast. These are usually images that should not be processed automatically for detection of structures or lesions, either because the retina contains pathologies like laser scars, because the image contains artifacts such as text labels printed on the retinal section, or because a non-fundus image (for instance an image of the eyeball) is erroneously passed to the algorithm. It has been found that a CV larger than about 1.3 indicates an unusually large contrast, and a $\mathrm{CV}_{\mathrm{rob}}$ less than about 0.6 corresponds to images of low quality.
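A sketch of the iterative outlier removal, maintaining n, S and SS as described above; the moment-matching step from the original-scale moments to the lognormal parameters is an assumption consistent with, but not verbatim from, the text:

```python
import numpy as np

def robust_cv(magnitudes, tol=4.0):
    """Robust CV: iteratively discard extreme gradient magnitudes deviating
    more than `tol` log-scale standard deviations from the mean, assuming a
    lognormal distribution (so no explicit log-transform is needed)."""
    lam = np.sort(np.asarray(magnitudes, dtype=float).ravel())
    lo, hi = 0, lam.size                          # current observations lam[lo:hi]
    S, SS = lam.sum(), (lam ** 2).sum()           # running sum and squared sum
    while hi - lo > 2:
        n = hi - lo
        m = S / n                                 # original-scale mean
        v = max((SS - S * S / n) / (n - 1), 0.0)  # original-scale variance
        s2 = np.log1p(v / (m * m))                # implied log-scale variance
        mu = np.log(m) - s2 / 2.0                 # implied log-scale mean
        sd = np.sqrt(s2)
        if lam[lo] < np.exp(mu - tol * sd):       # minimal observation an outlier?
            S -= lam[lo]; SS -= lam[lo] ** 2; lo += 1
        elif lam[hi - 1] > np.exp(mu + tol * sd):  # maximal observation an outlier?
            S -= lam[hi - 1]; SS -= lam[hi - 1] ** 2; hi -= 1
        else:
            break                                 # neither extreme is an outlier
    kept = lam[lo:hi]
    return kept.std(ddof=1) / kept.mean()         # CV_rob of the remaining values
```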
As an example, Figures 1-4 display four images and their gradient contrast measures. The top left image is acquired from a patient who has cataract, which is seen as a blurring of the image. The algorithm will most likely fail on this image, and it should be excluded from automatic analysis; in fact, it should probably not be graded at all. The robust CV is 0.54 and the image will thus be excluded with the 0.6-threshold proposed above. The top right panel displays an image of excellent quality; the robust CV is 1.05. The bottom left panel illustrates a case where the robust and un-robust CV's differ significantly. Here the retina is poorly illuminated and the image should be discarded; the small bright reflections yield a large gradient variance, however, thus the un-robust CV is 0.71 and the image would be accepted based on this measure. In the robust procedure, the gradients along the circumference of the reflections are removed, and the CV is 0.58, whence the image is excluded from analysis. Finally, the bottom right image has unusually large CV's, which indicate that it is not a typical fundus image; in this case, the large scars on the retina are the reason. As these will probably confuse the algorithm, the image should be returned for manual grading.
It has been found that images with low gradient contrast are generally dark or blurred, and these will receive a poor quality-score when evaluated by a human grader. In order to verify this assumption, the gradient contrast measure has been compared with the image-quality scores assigned by two human graders, Goddard and Taylor. The graders quantified the quality of the nasal and macular image simultaneously by assigning a score of 0, 1 or 2 to each eye, where 0 is best and 2 is worst. The average score of the two graders has been compared with the gradient contrast measures, and the observations support the claim that gradient contrast is a sensible measure of the "overall quality" of a fundus image.
The robustness of the quality measure has been evaluated in a gradient image $\{\lambda_{rc}\}$. The gradient image can be obtained by different means; presently both polynomial and Gaussian gradient estimation are implemented in the lesion module, and the gradients will to some extent depend on the parameter settings of the methods. In order to study how the CV measure depends on the gradient estimation method, the measure for a range of parameters for each of the four images displayed in Figures 1-4 has been calculated. The results are listed in Table 1 and displayed as a plot in Figure 5.
Table 1. The robust CV of gradient magnitudes for the four images displayed in Figures 1-4. The gradients are estimated using either a Gaussian kernel or polynomial kernels. "Gauss x" refers to a Gaussian kernel with standard deviation x units, wherein one unit is approximately 10 μm; "Poly x y" refers to a polynomial filter with order x and kernel-size y pixels. Note that for the poly-kernels the kernel-size is not scaled with the image scale. The original images are of scale 0.9, 0.4, 1.0 and 0.4 respectively, and the 0.4 images are resampled to scale 0.6 prior to the gradient estimation.
Gradient method    f03300s3062s2    f10441D0062    f1085s8670f2    f11759S0204R
Gauss 1.3 0.52 1.37 1.00 0.58
Gauss 1.4 0.53 1.39 1.02 0.58
Gauss 1.5 0.53 1.41 1.04 0.58
Gauss 1.6 0.54 1.42 1.05 0.57
Gauss 1.7 0.55 1.44 1.06 0.58
Gauss 1.8 0.56 1.45 1.07 0.58
Gauss 1.9 0.57 1.46 1.07 0.58
Gauss 2.0 0.58 1.47 1.08 0.58
Gauss 2.5 0.64 1.49 1.08 0.62
Gauss 3.0 0.71 1.49 1.08 0.71
Gauss 3.5 0.75 1.48 1.06 0.89
Poly 4 15 0.54 1.51 1.04 0.61
Poly 4 11 0.52 1.47 0.95 0.57
Poly 4 9 0.52 1.41 0.88 0.58
Poly 2 15 0.64 1.47 1.08 0.95
Poly 2 11 0.56 1.49 1.06 0.68
Poly 2 9 0.54 1.48 1.02 0.61
The CV's generally increase with the degree of smoothing (i.e. they increase with the width of the Gaussian kernel, and decrease with the poly-kernel-size), which may be explained by a general decrease in the mean gradient magnitude. For the Gaussian kernel, the CV's are reasonably robust when widths in the normal range 1.3 - 2.0 are used. For the polynomial kernels, there is a more pronounced variation, which has no systematic pattern between the images. One should recall that the kernel-sizes are measured in pixels, and hence it is a bit difficult to compare the variation for images of different scales. Still it is clear that the CV's are somewhat sensitive to the choice of kernel-size and order. This may be related to the fundamental difference between the Gaussian and polynomial kernels.
From a practical point of view, the most robust strategy currently is to use Gaussian kernels with standard deviation within the normal range (1.3-2.0).
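By way of illustration, Gaussian gradient estimation along these lines may be sketched as follows; the use of scipy's gaussian_filter and the application to a single image channel are assumptions made for the example, not prescriptions of the method.

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude(image, sigma=1.6):
    """Gradient magnitude image estimated with Gaussian derivative
    kernels. sigma = 1.6 is a mid-range choice from the 1.3-2.0
    interval found to be robust; it is expressed in the resampled
    units of the text (one unit approximately 10 um)."""
    img = np.asarray(image, dtype=float)
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # derivative along rows
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # derivative along columns
    return np.hypot(gx, gy)

# Combined with the robust CV sketched above:
# cv_rob = robust_cv(gradient_magnitude(channel, sigma=1.6))
```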
Interlacing
This is a measure of the overall amount of interlacing in the image. Interlacing effects arise from technical problems with the camera or scanner.
The interlacing effect may preferably be calculated by measures derived from the Fourier transformation. In particular, the interlacing measure is based on the ratio of the power at n predetermined frequencies in the column direction and the row direction of the image, wherein n is an integer ≥ 1.
For example, this effect may be calculated by the ratio of the powers at frequency ½ in the vertical and horizontal directions. Let $H_{r,c}$ denote the discrete Fourier transformation of the image at frequencies $(r/R, c/C)$ pixel-sides$^{-1}$,

$$H_{r,c} = \sum_{j=0}^{R-1} \sum_{k=0}^{C-1} h_{j,k}\, e^{-2\pi i (jr/R + kc/C)}, \qquad r = 0,\dots,R-1,\; c = 0,\dots,C-1.$$

Here $h_{j,k}$ denotes the intensity in row $j$, column $k$, and $R$ and $C$ are the number of rows and columns respectively. Assuming that $R$ and $C$ are even (which is usually the case), the interlacing ratio is then given by

$$IL = \frac{\frac{1}{C^2} \sum_{c=0}^{C-1} |H_{R/2,\,c}|^2}{\frac{1}{R^2} \sum_{r=0}^{R-1} |H_{r,\,C/2}|^2}.$$
When calculating the IL-ratio in practice, Parseval's theorem is used to write it as

$$IL = \frac{\frac{1}{C} \sum_{c=0}^{C-1} \tilde{h}_{\cdot,c}^{\,2}}{\frac{1}{R} \sum_{r=0}^{R-1} \tilde{h}_{r,\cdot}^{\,2}},$$

where

$$\tilde{h}_{\cdot,c} = \sum_{j=0}^{R-1} (-1)^j\, h_{j,c}$$

and $\tilde{h}_{r,\cdot}$ is defined equivalently for the rows.
Usually the IL is larger than 1, since interlacing is typically seen as horizontal stripes in the image. Images of good quality have IL ratios up to about 10. The interlacing effects are clearly visible when the IL ratio is higher than 20. In a few images, every second line may be missing; these have an IL ratio in the order of 10000.
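In practice the alternating-sum form above reduces to a few array operations. The sketch below assumes a single-channel image and trims odd dimensions to keep R and C even; it implements the Parseval form as reconstructed above, not necessarily the exact production code.

```python
import numpy as np

def interlace_ratio(image):
    """IL measure: mean squared alternating sum down the columns
    (power at vertical frequency 1/2) divided by the corresponding
    quantity along the rows."""
    h = np.asarray(image, dtype=float)
    h = h[: h.shape[0] // 2 * 2, : h.shape[1] // 2 * 2]  # force even R, C
    R, C = h.shape
    alt_r = np.where(np.arange(R) % 2 == 0, 1.0, -1.0)
    alt_c = np.where(np.arange(C) % 2 == 0, 1.0, -1.0)
    col_power = np.mean((alt_r @ h) ** 2)  # one alternating sum per column
    row_power = np.mean((h @ alt_c) ** 2)  # one alternating sum per row
    return col_power / row_power
```

With the thresholds above, an IL ratio above about 20 would flag visible interlacing, and values in the order of 10000 would indicate missing lines.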
Illumination
An illumination measure is usually a local measure, which is used to identify regions that are too dark or too bright due to inhomogeneous illumination of the retina, but it may of course be applied as a global measure as well, i.e. when the whole image is poorly illuminated. The aim is to use this both locally, for example for removing these areas prior to detecting lesions, and globally as a measure of the overall illumination quality.
The illumination quality of the image may be calculated by measuring the saturation of the image using at least one of the channels, preferably at least two of the channels, such as the red and the green channel of the image. Preferably the quality is calculated using the saturation of the red and green channels.
Sections which are very dark or bright tend to have little colour information, and hence a low saturation value. Thus, a low saturation value is a quality measure for improper illumination of the image. The blue channel is preferably omitted from the saturation value, however, as this channel in general contains very little information. In the following, saturation will always refer to red/green-saturation in the above sense.
The saturation image is mean-filtered with kernel size 51 units, wherein one unit is approximately 10 μm, and normalized by the mean saturation within the ROI. This normalization ensures that only areas that have low saturation values relative to the global image saturation are masked out. Improperly illuminated sections are sections of the image which touch the boundary of the ROI and have a normalized saturation value less than about 0.55. These regions may be identified by growing regions from the boundary pixels in the saturation image. The total area of these sections, relative to the size of the ROI, constitutes the global illumination quality of the image. For automatic screening, the image should be excluded from analysis if a large part of the retina, more than 40% for instance, is badly illuminated.
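A sketch of this procedure is given below. The red/green saturation is taken here as |r - g|/(r + g), a stand-in for the document's own definition given earlier, and the function name and ROI handling are hypothetical; the 51-unit mean filter, the 0.55 threshold and the growing of regions from the ROI boundary follow the text.

```python
import numpy as np
from scipy import ndimage

def illumination_quality(red, green, roi_mask, kernel=51, thresh=0.55):
    """Fraction of the ROI covered by low-saturation regions that
    touch the ROI boundary; values above about 0.4 would exclude
    the image from automatic screening. roi_mask is boolean."""
    r = np.asarray(red, dtype=float)
    g = np.asarray(green, dtype=float)
    sat = np.abs(r - g) / (r + g + 1e-9)            # red/green saturation stand-in
    sat = ndimage.uniform_filter(sat, size=kernel)  # mean filter, 51 units
    sat = sat / sat[roi_mask].mean()                # normalize by mean ROI saturation
    low = (sat < thresh) & roi_mask
    # keep only low-saturation regions grown from the ROI boundary
    boundary = roi_mask & ~ndimage.binary_erosion(roi_mask)
    labels, _ = ndimage.label(low)
    touching = np.unique(labels[boundary & low])
    bad = np.isin(labels, touching[touching > 0])
    return bad.sum() / roi_mask.sum()
```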
Signal to noise ratio
This is preferably a local measure, which is used to identify sections with low signal-to-noise ratio. The aim is to use this both locally, for removing these areas prior to detecting lesions, and globally as a measure of the overall SNR quality.
The "signal" (i.e. the signal and the noise) is defined as un-sharp masking of the original image, with a kernel-size of for example 191 pixels in the imgaes, wherein one unit is approximately 10 μm. Thus, low-frequency components are removed from the image. Let {S, } denote the resulting image. The noise image {N,.} is defined by the residual image after subtracting the 3χ3 median-filtered signal from {S, } . The local signal-to-noise ratio is defined as
$$SNR_i = \frac{\bar{S}_i}{\operatorname{sd}\{\,N_j : j \in V_i\,\}},$$

where $V_i$ is a region of size 51 × 51 around pixel $i$, and $\bar{S}_i$ is the average of $S$ over $V_i$. The SNR-image is finally smoothed with a mean-filter of, for example, size 51 × 51 to obtain the final SNR-image.
In general, if the smoothed SNR is less than about 1.3, the noise level will locally be too high for automated analysis of the image. As a global measure of the noise level, one may use either the average SNR or the fraction of the retinal part of the image that has an SNR level lower than 1.3. The latter will often be most appropriate for robustness reasons.
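By way of illustration, the pipeline may be sketched as follows; the kernel and window sizes follow the text, while the windowed-variance estimate of the local noise standard deviation is an assumed reading of the formula reconstructed above.

```python
import numpy as np
from scipy import ndimage

def snr_image(image, unsharp=191, window=51):
    """Local SNR: un-sharp masked 'signal' S, noise N as the residual
    from a 3x3 median filter of S, SNR as windowed mean of S over
    windowed sd of N, finally smoothed with a mean filter."""
    img = np.asarray(image, dtype=float)
    S = img - ndimage.uniform_filter(img, size=unsharp)  # remove low frequencies
    N = S - ndimage.median_filter(S, size=3)             # noise residual
    S_mean = ndimage.uniform_filter(S, size=window)
    N_mean = ndimage.uniform_filter(N, size=window)
    N_var = ndimage.uniform_filter(N * N, size=window) - N_mean ** 2
    snr = S_mean / np.sqrt(np.maximum(N_var, 1e-12))
    return ndimage.uniform_filter(snr, size=window)      # final smoothing
```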
Artefacts
In order to measure artefacts relating to the optical and/or electronic route of acquiring the image, the quality measure may include specific measures for identifying the artefacts, such as: templates searching for drops of liquid or secretions that may be positioned on the camera lens; text lines inadvertently placed by the electronic system over the image, which give rise to high variations and may also be detected by templates or by searching for extrema in single colour-channel image functions; spots due to camera artefacts; pixel errors due to camera or CCD errors; and reflections that may arise from a healthy individual's vitreous body and lead to bright areas around the fovea and extensions therefrom.
An artefact according to this invention is any presentation on the image that is not part of the scene of the image. Thus, artefacts may for example be one or more of the following: undesired objects projected onto the image, dispersion, diffraction and/or reflection of the optic system.
Examples of undesired objects projected onto the image are an eye lash, the edge of the iris, text in the image, and digital errors.

Pathologies
Pathologies, indicators of which may be quantified or qualified by the present invention, are generally pathologies relating to the parts of the eye lying in front of the fundus itself. However, some pathologies may also be present in the fundus region to such an extent that they influence the overall quality.
Any pathology influencing the quality in a systematic manner may be quantified or qualified by calculating a quality measure and correlating it with standards. In particular the pathology indicators may be indicators for one or more of the following pathologies: glaucoma, diabetic retinopathy, amotio retina, hemorrhages of the retina, cataract, scars, photo-coagulation scars, laser scars, pathological vessel growth.
Number of quality measures
In order to be able to quantify or qualify a series of pathologies and artefacts it is preferred to use more than one quality measure for each of the images. Therefore, it is preferred that at least two quality measures are calculated, more preferably at least three. In a preferred embodiment the quality measures comprise calculating at least a contrast measure and an interlacing measure.
Applications
The following is a non-limiting description of some of the applications for using the quality measure.
Calculation of a quality measure and estimation of a gradability measure may be used for identifying regions in the image that should be masked before detecting the presence or absence of a structure and/or a pathological condition in parts of a fundus image, said method comprising
a) calculating a quality measure of said fundus image parts,
b) estimating the gradability measure of said fundus image,

c) masking parts of said fundus image having a quality measure and/or a gradability measure below a predetermined threshold,
d) detecting the presence or absence of a structure and/or pathological condition in said fundus image.
The method is preferably combined with methods for detecting specific structures, such as the vessels, the optic nerve head and the fovea, and for detecting lesions, such as microaneurysms and exudates, which show up on fundus images as generally "dot shaped" (i.e. substantially circular) areas. It is of interest to distinguish between such microaneurysms and exudates, and further to distinguish them from other pathologies in the image, such as "cotton wool spots" and hemorrhages.
Methods and algorithms for detecting the structures and lesions are known to the person skilled in the art. In a preferred embodiment the methods are those described in the co-pending PCT patent applications entitled "Detection of optic nerve head in a fundus image", "Assessment of lesions in an image" and "Detection of vessels in an image", all by RETINALYZE A/S.
Also, as discussed above, the quality measure may be used in a method for registering at least two fundus images from the same eye, comprising
a) calculating an acquisition quality measure of said fundus image,
b) selecting images having an acquisition quality measure above a predetermined threshold, and
c) registering the fundus images.
A very important use of the quality measures is for selecting fundus images for automatic screening, so that images having a poor quality are rejected from automatic screening. Thus, the present invention relates to a method for selecting fundus images for automatic screening, comprising

a) acquiring a fundus image,

b) calculating an acquisition quality measure of said fundus image,

c) optionally estimating a gradability measure of said fundus image,

d) classifying the fundus image quality measure with respect to a predetermined threshold, and

e) selecting for automatic screening a fundus image having the quality measure and optionally the gradability measure above the predetermined threshold.
Rather than being processed automatically, images with poor quality should be returned to the user with a message that the image could not be processed. Many images, which have poor gradient contrast, are also judged to be ungradeable by the human grader, and by returning the images immediately, the algorithms will treat data much as a human grader would. It is within the scope of the invention that the quality measure is conducted immediately after the recording of the image, to let the photographer acquire a new image if the first one was unacceptable.
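By way of illustration, the individual thresholds quoted in this text may be combined into a simple gatekeeper as sketched below; the order and combination of the tests are assumptions, only the threshold values themselves come from the text.

```python
def screening_decision(cv, cv_rob, il, bad_illum_frac):
    """Illustrative accept/reject logic for automatic screening,
    combining the thresholds discussed above."""
    if cv_rob < 0.6:
        return "reject: low gradient contrast (dark or blurred image)"
    if cv > 1.3:
        return "return for manual grading: unusually large contrast"
    if il > 20:
        return "reject: visible interlacing"
    if bad_illum_frac > 0.4:
        return "reject: poorly illuminated retina"
    return "accept for automatic analysis"
```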
The invention also relates to a method for detecting the presence or absence of a structure and/or a pathological condition in parts of a fundus image taking into account the quality measure and gradability measure, said method comprising
a) acquiring a fundus image,
b) calculating an acquisition quality measure of said fundus image,
c) optionally estimating a gradability measure of said fundus image,
d) detecting the structures and/or pathological condition
e) weighting the detected structure and/or pathological condition with the quality measure and/or gradability measure, and

f) obtaining a weighted estimate for the presence or absence of various structures and pathological conditions in the fundus image.
The invention further relates to a system comprising the algorithms capable of performing the methods according to the invention. Thus, the system according to the invention may be any system capable of conducting the method as described above as well as any combinations thereof within the scope of the invention.
Accordingly, the system may include algorithms to perform any of the methods described above.
A graphical user interface module may operate in conjunction with a display screen of a display monitor. The graphical user interface may be implemented as part of the processing system to receive input data and commands from a conventional keyboard and mouse through an interface and display results on a display monitor. For simplicity of the explanation, many components of a conventional computer system have not been discussed, such as address buffers, memory buffers, and other standard control circuits, because these elements are well known in the art and a detailed description thereof is not necessary for understanding the present invention.
Pre-acquired image data can be fed directly into the processing system through a network interface and stored locally on a mass storage device and/or in a memory. Furthermore, image data may also be supplied over a network, through a portable mass storage medium such as a removable hard disk, optical disks, tape drives, or any other type of data transfer and/or storage devices which are known in the art.
One skilled in the art will recognize that a parallel computer platform having multiple processors is also a suitable hardware platform for use with a system according to the present invention. Such a configuration may include, but not be limited to, parallel machines and workstations with multiple processors. The processing system can be a single computer, or several computers can be connected through a communications network to create a logical processing system.
Any of the algorithms of the systems described above may be adapted to the various variations of the methods described above. The present system allows the grader, that is, the person normally grading the images, to identify the structures and lesions more rapidly and securely. Thus, by using the quality measure the system described in the following is a more reliable system, wherein it is also possible to arrange for acquisition of the images at one location and examination of them at another location. For example, the images may be recorded by any optician or physician or elsewhere and be transported to the examining specialist, either as photos or the like or on digital media. Accordingly, by use of the present system, decentralised centres for recording the images could be realised while maintaining fewer expert graders.
Furthermore, in addition to the communication of images and medical information between persons involved in the procedure, the network may carry data signals including control or image adjustment signals by which the expert examining the images at the examining unit directly controls the image acquisition occurring at the recordation location, i.e. the acquisition unit. In particular, such command signals as zoom magnification, steering adjustments, and wavelength of field illumination may be selectively varied remotely to achieve the desired imaging effect. Thus, questionable tissue structures requiring greater magnification or a different perspective for their elucidation may be quickly resolved without ambiguity by varying such control parameters. Furthermore, by switching illumination wavelengths, views may be selectively taken to represent different layers of tissue, or to accentuate imaging of the vasculature and blood flow characteristics. In addition, where a specialized study such as fluorescence imaging is undertaken, the control signals may include time-varying signals to initiate stimulation with certain wavelengths of light, to initiate imaging at certain times after stimulation or delivery of dye or drugs, or other such precisely controlled imaging protocols. The digital data signals for these operations may be interfaced to the ophthalmic equipment in a relatively straightforward fashion, provided such equipment already has initiating switches or internal digital circuitry for controlling the particular parameters involved, or is capable of readily adapting electric controls to such control parameters as system focus, illumination and the like.
Also, the examining expert could be able to exert some treatment in the same remote manner. It will be understood that the imaging and ophthalmic treatment instrumentation in this case will generally include a steering and stabilization system which maintains both instruments in alignment and stabilized on the structures appearing in the field of view. However, in view of the small but non-negligible time delays still involved between image acquisition and initiation of diagnostic or treatment activity at the examination site, the invention contemplates in this aspect that the system control further includes image identification and correlation software which allows the ophthalmologist at the examination site to identify particular positions in the retinal field of view, such as pinpointing particular vessels or tissue structures, and that the image acquisition computer includes image recognition software which enables it to identify patterns in the video frames and correlate the identified position with each image frame as it is acquired at the acquisition site. For example, the image recognition software may lock onto a pattern of retinal vessels. Thus, despite the presence of saccades and other abrupt eye movements of the small retinal field which may occur over relatively brief time intervals, the ophthalmic instrumentation is aimed at the identified site in the field of view and remote treatment is achieved.
In addition to the foregoing operation, the invention further contemplates that the images provided by acquisition unit are processed for photogrammetric analysis of tissue features and optionally blood flow characteristics. This may be accomplished as follows. An image acquired at the recordation unit is sent to an examination unit, where it is displayed on the screen. As indicated schematically in the figure, such image may include a network of blood vessels having various diameters and lengths. These vessels include both arterial and venous capillaries constituting the blood supply and return network. At the examination unit, the workstation may be equipped with a photogrammetric measurement program which for example may enable the technician to place a cursor on an imaged vessel, and moving the cursor along the vessel while clicking, have the software automatically determine the width of the vessel and the subvessels to which it is connected, as well as the coordinates thereof.
The software for noting coordinates from the pixel positions and linking displayed features in a record, as well as submodules which determine vessel capacities and the like, is straightforward and readily built up from photogrammetric program techniques. Workstation protocols may also be implemented to automatically map the vasculature as described above, or to compare two images taken at historically different times and identify or annotate the changes which have occurred, highlighting for the operator features such as vessel erosion, tissue which has changed colour, or other differences. In addition, a graphical user interface allows the specialist to type in diagnostic indications linked to the image, or to a particular feature appearing at a location in the image, so that the image or a processed version of it becomes more useful.
With suitable training, the relative health of the vessel, its blood carrying capacity and the like may also be visually observed and noted. This photogrammetric analysis allows a road map of the vasculature and its capacity to be compiled, together with annotations as to the extent of tissue health or disease apparent upon such inspection. Thus, a very precise and well-annotated medical record may be readily compiled and may be compared to a previously taken view for detailed evidence of changes over a period of time, or may be compared, for example, to immediately preceding angiographic views in order to assess the actual degree of blood flow occurring therein. As with the ophthalmologist's note pad entries at examination unit, the measurement entries at examination unit become an annotated image record and are stored in the central library as part of the patient's record.
Unlike a simple medical record system, the present invention changes the dynamics of patient access to care, and the efficiency of delivery of ophthalmic expertise in a manner that solves an enormous current health care dilemma, namely, the obstacle to proper universal screening for diabetic retinopathy. A basic embodiment of the invention being thus disclosed and described, further variations and modifications will occur to those skilled in the art, and all such variations and modifications are encompassed within the scope of the invention as defined in the claims appended hereto.

Claims:
1. An automatic method for quantifying and/or qualifying pathology indicators and/or artefacts of a fundus image or of a part of a fundus image, comprising
a) calculating at least one acquisition quality measure of the image,
b) comparing the quality measure calculated with standard quality measures for the pathology indicators and artefacts to be quantified and/or qualified, and
c) quantifying and/or qualifying pathology indicators and/or artefacts of the fundus image.
2. The method according to any of the preceding claims, wherein the image is presented on a medium selected from slides, paper photos or digital photos.
3. The method according to any of the preceding claims, wherein the image is a colour image.
4. The method according to any of the preceding claims, wherein the quality measure is selected from a contrast measure, a sharpness measure, an interlacing measure, a signal-to-noise ratio, a colour composition measure and an illumination measure.
5. The method according to claim 4, wherein at least one quality measure is a contrast measure.
6. The method according to claim 4 or 5, wherein the contrast measure is the variation in the gradient magnitude image, preferably the coefficient-of-variation of the gradient magnitude.
7. The method according to claim 4, 5 or 6, wherein the contrast measure is calculated robustly, preferably by iteratively discarding out-liers.
8. The method according to claim 7, wherein out-liers are observations deviating more than a number of standard deviations from the mean on a log-scale, said number preferably being in the range of from 1 to 10.
9. The method according to any of the preceding claims, wherein the interlacing effect is calculated by measures derived from the Fourier transformation.
10. The method according to claim 9, wherein the interlacing measure is based on the ratio of the power at n predetermined frequencies in the column direction and the row direction of the image, wherein n is an integer ≥ 1.
11. The method according to any of the preceding claims, wherein the illumination quality of the image is calculated by measuring the saturation of the image using at least one of the channels, preferably at least two of the channels, such as the red and the green channel of the image.
12. The method according to claim 11, wherein a low saturation value is a quality measure for improper illumination of the image.
13. The method according to any of the preceding claims, wherein the pathology indicators may be indicators for one or more of the following pathologies: glaucoma, cataract, corneal or vitreous opacity, and complications and consequences of for instance diabetic retinopathy, such as extensive retinal hemorrhages, traction retinal detachment or retinal holes, new retinal vessels or new iris vessels, fibrovascular tissue, vitreous hemorrhage and laser scars.
14. The method according to any of the preceding claims, wherein the artefacts may be one or more of the following: undesired objects projected onto the image, dispersion, diffraction and/or reflection of the optic system.
15. The method according to claim 14, wherein the undesired objects projected onto the image are selected from an eye lash, an eye lid, or the edge of the iris, text in the image, and digital errors.
16. The method according to claim 14, wherein the optic system is any part of the optic system from retina, vitreous body, lens, cornea and camera or recorder.
17. The method according to any of the preceding claims, wherein the artefact is a liquid drop or secretions on the lens.
18. The method according to any of the preceding claims, wherein at least two quality measures are calculated, preferably at least three quality measures are calculated.
19. The method according to claim 18, wherein at least a contrast measure and an interlacing measure are calculated.
20. The method according to any of the preceding claims, wherein at least one quality measure is calculated locally for more than one part of the fundus image.
21. The method according to claim 20, wherein a local quality measure is assigned to more than one part of the image.
22. A method for quantifying the quality of a fundus image, comprising
a) calculating at least one quality measure, wherein said quality measure is selected from a contrast measure, a sharpness measure, an interlacing measure, a signal-to-noise ratio, a colour composition measure and an illumination measure.
b) quantifying the quality of the fundus image based on the quality measure calculated.
23. The method according to claim 22, wherein at least one quality measure is a contrast measure.
24. The method according to claim 23, wherein the contrast measure is the variation in the gradient magnitude image, preferably the coefficient-of-variation of the gradient magnitude.
25. The method according to claim 22, 23 or 24, wherein the contrast measure is calculated robustly, preferably by iteratively discarding out-liers.
26. The method according to claim 25, wherein out-liers are observations deviating more than a number of standard deviations from the mean on a log-scale, said number preferably being in the range of from 1 to 10.
27. The method according to any of the preceding claims 22-26, wherein the interlacing effect is calculated by a measure derived from the Fourier transformation.
28. The method according to claim 27, wherein the interlacing measure is based on the ratio of the power at n predetermined frequencies in the column direction and the row direction of the image, wherein n is an integer ≥ 1.
29. The method according to any of the preceding claims 22-28, wherein the illumination quality of the image is calculated by measuring the saturation of the image using at least one of the channels, preferably at least two of the channels, such as the red and the green channel of the image.
30. The method according to claim 29, wherein a low saturation value is a quality measure for improper illumination of the image.
31. The method according to any of the preceding claims 22-29, wherein at least a contrast measure and an interlacing measure are calculated.
32. A method for weighting the distinctiveness of a structure and/or a pathological condition in a fundus image, comprising
a) calculating an acquisition quality measure of said fundus image,
b) estimating the gradability measure of said fundus image,
c) detecting the presence or absence of a structure and/or pathological condition in said fundus image,

d) weighting the distinctiveness of the detected structure and/or pathological condition with the quality measure and/or with the gradability measure, obtaining a weighted structure and/or pathological condition in said fundus image.
33. The method according to claim 32, wherein the structure is selected from vessels, optic nerve head and/or fovea.
34. The method according to claim 32 or 33, wherein the pathological condition is selected from lesions, edema, tortuous vessels, aneurysms and exudates.
35. A method for detecting the presence or absence of a structure and/or a pathological condition in parts of a fundus image, comprising
a) calculating a quality measure of said fundus image parts,
b) estimating the gradability measure of said fundus image,
c) masking parts of said fundus image having a quality measure and/or a gradability measure below a predetermined threshold,
d) detecting the presence or absence of a structure and/or pathological condition in said fundus image.
36. A method for registering at least two fundus images from the same eye, comprising
a) calculating an acquisition quality measure of said fundus image,
b) selecting images having an acquisition quality measure above a predetermined threshold, and
c) registering the fundus images.
37. A method for selecting fundus images for automatic screening, comprising
a) acquiring a fundus image,
b) calculating an acquisition quality measure of said fundus image,
c) optionally estimating a gradability measure of said fundus image,
d) classifying the fundus image quality measure with respect to a predetermined threshold, and
e) selecting for automatic screening a fundus image having the quality measure and optionally the gradability measure above the predetermined threshold.
38. The method according to claim 37, wherein the classification is conducted during recordation of the fundus image.
39. The method according to claim 38, further comprising a step of re-acquisition of a fundus image having quality measure below the predetermined threshold.
40. A quantified quality measure of a fundus image, obtainable by
a) calculating at least one quality measure, wherein said quality measure is selected from a contrast measure, a sharpness measure, an interlacing measure, a signal-to-noise ratio, a colour composition measure and an illumination measure.
b) quantifying the quality measure of the fundus image based on the quality measure calculated.
41. A system for quantifying and/or qualifying pathology indicators and/or artefacts of a fundus image or of a part of a fundus image, comprising
a) algorithms for calculating at least one acquisition quality measure of the image,

b) algorithms for comparing the quality measure calculated with standard quality measures for the pathology indicators and artefacts to be quantified and/or qualified, and
c) algorithms for quantifying and/or qualifying pathology indicators and/or artefacts of the fundus image.
42. A system for quantifying the quality of a fundus image, comprising
a) algorithms for calculating at least one quality measure, wherein said quality measure is selected from a contrast measure, a sharpness measure, an interlacing measure, a signal-to-noise ratio, a colour composition measure and an illumination measure.
b) algorithms for quantifying the quality of the fundus image based on the quality measure calculated.
PCT/DK2002/000660 2001-10-03 2002-10-03 Quality measure WO2003030073A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DKPA200101450 2001-10-03
DKPA200101450 2001-10-03
US37501602P 2002-04-25 2002-04-25
US60/375,016 2002-04-25

Publications (1)

Publication Number Publication Date
WO2003030073A1 true WO2003030073A1 (en) 2003-04-10

Family

ID=26069072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DK2002/000660 WO2003030073A1 (en) 2001-10-03 2002-10-03 Quality measure

Country Status (1)

Country Link
WO (1) WO2003030073A1 (en)


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CIDECIYAN A V: "REGISTRATION OF OCULAR FUNDUS IMAGES", IEEE ENGINEERING IN MEDICINE AND BIOLOGY MAGAZINE, IEEE INC. NEW YORK, US, vol. 14, no. 1, 1995, pages 52 - 58, XP000486770, ISSN: 0739-5175 *
LEE S C ET AL: "Comparison of diagnosis of early retinal lesions of diabetic retinopathy between a computer system and human experts.", ARCHIVES OF OPHTHALMOLOGY. UNITED STATES APR 2001, vol. 119, no. 4, April 2001 (2001-04-01), pages 509 - 515, XP008008870, ISSN: 0003-9950 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2470727A (en) * 2009-06-02 2010-12-08 Univ Aberdeen Processing retinal images using mask data from reference images
US11058293B2 (en) * 2011-09-20 2021-07-13 Canon Kabushiki Kaisha Image processing apparatus, ophthalmologic imaging apparatus, image processing method, and storage medium
US20150124218A1 (en) * 2011-09-20 2015-05-07 Canon Kabushiki Kaisha Image processing apparatus, ophthalmologic imaging apparatus, image processing method, and storage medium
GB2491941A (en) * 2011-10-24 2012-12-19 Iriss Medical Technologies Ltd Image processing to detect abnormal eye conditions
GB2491941B (en) * 2011-10-24 2013-09-25 Iriss Medical Technologies Ltd System and method for identifying eye conditions
US9149179B2 (en) 2011-10-24 2015-10-06 Iriss Medical Technologies Limited System and method for identifying eye conditions
WO2013078582A1 (en) * 2011-11-28 2013-06-06 Thomson Licensing Video quality measurement considering multiple artifacts
US9924167B2 (en) 2011-11-28 2018-03-20 Thomson Licensing Video quality measurement considering multiple artifacts
EP2856930A4 (en) * 2012-05-04 2016-06-15 Uni Politècnica De Catalunya Method for the detection of visual function losses
US9905008B2 (en) 2013-10-10 2018-02-27 University Of Rochester Automated fundus image field detection and quality assessment
US8885901B1 (en) 2013-10-22 2014-11-11 Eyenuk, Inc. Systems and methods for automated enhancement of retinal images
US9008391B1 (en) 2013-10-22 2015-04-14 Eyenuk, Inc. Systems and methods for processing retinal images for screening of diseases or abnormalities
US9002085B1 (en) 2013-10-22 2015-04-07 Eyenuk, Inc. Systems and methods for automatically generating descriptions of retinal images
US8879813B1 (en) 2013-10-22 2014-11-04 Eyenuk, Inc. Systems and methods for automated interest region detection in retinal images
US9384416B1 (en) 2014-02-20 2016-07-05 University Of South Florida Quantitative image analysis applied to the grading of vitreous haze
WO2016032397A1 (en) * 2014-08-25 2016-03-03 Agency For Science, Technology And Research (A*Star) Methods and systems for assessing retinal images, and obtaining information from retinal images
EP3186779A4 (en) * 2014-08-25 2018-04-04 Agency For Science, Technology And Research (A*star) Methods and systems for assessing retinal images, and obtaining information from retinal images
US10325176B2 (en) 2014-08-25 2019-06-18 Agency For Science, Technology And Research Methods and systems for assessing retinal images, and obtaining information from retinal images
WO2016040317A1 (en) * 2014-09-08 2016-03-17 The Cleveland Clinic Foundation Automated analysis of angiographic images
US10628940B2 (en) 2014-09-08 2020-04-21 The Cleveland Clinic Foundation Automated analysis of angiographic images
US10278859B2 (en) 2014-10-17 2019-05-07 The Cleveland Clinic Foundation Image-guided delivery of ophthalmic therapeutics
US10888455B2 (en) 2014-10-17 2021-01-12 The Cleveland Clinic Foundation Image-guided delivery of ophthalmic therapeutics

Similar Documents

Publication Publication Date Title
US7583827B2 (en) Assessment of lesions in an image
Prentašić et al. Diabetic retinopathy image database (DRiDB): a new database for diabetic retinopathy screening programs research
Tobin et al. Detection of anatomic structures in human retinal imagery
Hubbard et al. Methods for evaluation of retinal microvascular abnormalities associated with hypertension/sclerosis in the Atherosclerosis Risk in Communities Study
Quellec et al. Optimal filter framework for automated, instantaneous detection of lesions in retinal images
Chun et al. Objective assessment of corneal staining using digital image analysis
US20120065518A1 (en) Systems and methods for multilayer imaging and retinal injury analysis
Bartlett et al. Use of fundus imaging in quantification of age-related macular change
Köse et al. A statistical segmentation method for measuring age-related macular degeneration in retinal fundus images
Friedman et al. Digital image capture and automated analysis of posterior capsular opacification
US20220319708A1 (en) Automated disease identification based on ophthalmic images
EP1716804A1 (en) Retina function optical measuring method and instrument
WO2003030073A1 (en) Quality measure
WO2003030075A1 (en) Detection of optic nerve head in a fundus image
Azar et al. Classification and detection of diabetic retinopathy
Peli Electro-optic fundus imaging
Noronha et al. Automated diagnosis of diabetes maculopathy: a survey
WO2004082453A2 (en) Assessment of lesions in an image
WO2003030101A2 (en) Detection of vessels in an image
DK1444635T3 (en) Assessment of lesions in an image
Mohammadi et al. The computer based method to diabetic retinopathy assessment in retinal images: a review.
Kaur et al. Preliminary analysis and survey of retinal disease diagnosis through identification and segmentation of bright and dark lesions
Anitha et al. Validating Retinal Color Fundus Databases and Methods for Diabetic Retinopathy Screening
Rehkopf et al. Ophthalmic image processing
Kaur et al. Preliminary Study of Retinal Lesions Classification on Retinal Fundus Images for the Diagnosis of Retinal Diseases

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MK MN MW MX MZ NO NZ OM PH PT RO RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG US UZ VC VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP