
Implement geometry-based and intensity-based registration algorithms between pairs of retinal 2D scans. Libraries: numpy, opencv, skimage


OdedMous/Medical-Image-Registration


Medical-Image-Registration

Figure: baseline image, registered follow-up image, and combined overlay.

Goal

The goal of this project is to implement automatic registration algorithms between two retinal 2D scans. Two different techniques are implemented: geometry-based registration and intensity-based registration.

Background

Registration in Medicine

Usually, there is significant movement between two images of the same patient taken at two different times. This is because the patient is in different poses, because of internal movements (e.g., breathing), and because of other physical changes that occurred in the time between the scans. Registering the images makes it possible to compare them, e.g. to track differences or to evaluate the efficacy of a treatment when baseline and follow-up images are provided.

In this project we used pairs of retinal 2D scans of patients who suffered from wet age-related macular degeneration (wet AMD), an eye disease that causes blurred vision or a blind spot in the visual field. It is generally caused by abnormal blood vessels that leak fluid or blood into the macula. The first scan in each pair is a baseline image, and the second is an image taken later, in order to examine how the disease has evolved.

Figure: illustration of the AMD condition and an example of a blind spot in the visual field.

Rigid Registration

In this project we assume that the anatomical structures of interest retain their shape between the two images, and hence a rigid transform is sufficient to align them. Rigid registration consists of computing the translation and rotation that align the images; for 2D images this means three parameters: two translations and one rotation.
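The three-parameter transform can be written as p' = R(θ)p + t. A minimal numpy sketch (the function name is illustrative, not from the repo):

```python
import numpy as np

def rigid_transform(points, theta, tx, ty):
    """Apply a 2D rigid transform: rotate points (n, 2) by theta radians
    about the origin, then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([tx, ty])

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
moved = rigid_transform(pts, np.pi / 2, 2.0, 3.0)  # maps (1,0)->(2,4), (0,1)->(1,3)
```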


Rigid registration algorithms can be categorized into two groups: geometry-based and intensity-based. In geometry-based algorithms, the features to be matched, e.g. points, are first identified in each image and then paired; the sum of squared distances between the paired points is then minimized to find the rigid transformation. In intensity-based algorithms, a similarity measure is defined between the images, and the transformation that maximizes the similarity is the desired one. Rigid registration algorithms are iterative: in each step, a transformation is generated and tested to see whether it reduces the sum of squared distances between the points or increases the similarity between the images.

Geometry-based Registration

Algorithm:

  1. Feature Detection - using the SIFT algorithm.
  2. Feature Matching - using nearest-neighbor matching with a ratio test: for each feature "a" in img1, take the two nearest features "a1", "a2" in img2 and accept the pair (a, a1) only if distance(a, a1) / distance(a, a2) < threshold. Meaning: we want "a" to have a single clear match in img2. If it has two nearly equally close candidates, "a" is not a distinctive feature. "a" passes the test when distance(a, a1) is small relative to distance(a, a2).
  3. Transformation Computation - the registration matrix is calculated from the selected matches. The RANSAC algorithm is used to handle outlier matches. Each RANSAC iteration uses the closed-form (SVD) solution, which I implemented following the article "Least-Squares Rigid Motion Using SVD" (https://igl.ethz.ch/projects/ARAP/svd_rot.pdf).


Illustration of the matches. Red dots without a connecting green line are outlier matches.
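The closed-form SVD solution used inside RANSAC (step 3) can be sketched directly from the cited article; given matched point sets P and Q, it recovers the rigid motion mapping P onto Q (the function name is illustrative):

```python
import numpy as np

def rigid_from_matches(P, Q):
    """Closed-form least-squares rigid motion (R, t) mapping point set P onto Q,
    following 'Least-Squares Rigid Motion Using SVD'. P, Q: (n, 2) matched points."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - p_bar, Q - q_bar                # center both point sets
    U, _, Vt = np.linalg.svd(X.T @ Y)          # SVD of the 2x2 covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Recover a known rotation/translation from noiseless correspondences.
rng = np.random.default_rng(42)
theta = np.deg2rad(12)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([4.0, -2.0])
P = rng.random((20, 2)) * 100
Q = P @ R_true.T + t_true
R, t = rigid_from_matches(P, Q)
```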

Intensity-based Registration

Algorithm:

  1. Get the BL (baseline) and FU (follow-up) retinal blood-vessel segmentations (explained below).
  2. For each angle in [-30, 30]:
    • Rotate the FU segmentation by that angle.
    • Perform cross-correlation between the rotated FU segmentation and the BL segmentation, and save the result in a list.
  3. Find the translation vector and angle that gave the minimum error.
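This search can be sketched with skimage, where phase cross-correlation supplies both the best shift and an error measure per angle. The binary images below are synthetic stand-ins for the vessel segmentations, so this is an illustration of the scheme rather than the repo's exact code:

```python
import numpy as np
from skimage.transform import rotate
from skimage.registration import phase_cross_correlation

# Synthetic "segmentations": a cross-shaped BL mask, and an FU mask that is
# the BL mask rotated by -8 degrees and shifted by (5, -3) pixels.
bl = np.zeros((128, 128))
bl[40:60, 30:100] = 1.0
bl[20:110, 60:75] = 1.0
fu = np.roll(rotate(bl, -8, preserve_range=True), (5, -3), axis=(0, 1))

best = None
for angle in range(-30, 31):                        # 2. try each candidate angle
    candidate = rotate(fu, angle, preserve_range=True)
    shift, error, _ = phase_cross_correlation(bl, candidate)
    if best is None or error < best[0]:             # 3. keep the minimum error
        best = (error, angle, shift)

best_error, best_angle, best_shift = best           # best_angle ~ +8 undoes the -8
```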

Segment Retinal Blood Vessels
Input: an image of a human retina.
Output: a segmentation of the blood vessels in the retina.

Algorithm:

  1. Convert the image from RGB to grayscale.
  2. Contrast Enhancement - apply CLAHE (contrast-limited adaptive histogram equalization) to strengthen the contrast of the image.
  3. Background Exclusion - subtract a blurred copy of the image (from the previous step) from the image itself, to eliminate background variations in illumination so that the foreground objects (in our case the blood vessels) can be analyzed more easily.
  4. Thresholding - apply thresholding using the isodata algorithm.
  5. Morphological Operations - perform morphological operations (such as opening and closing) to discard noise.
Figure panels: A – Original image · B – Contrast Enhancement · C – Background Exclusion · D – Thresholding · E – Opening (3,3) · F – Opening (5,5) · G – Opening (7,7) · H – Closing (17,17) · I – Final segmentation (after remove_small_objects).
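The five steps can be sketched with skimage. This is a minimal sketch on a synthetic image, not the repo's exact code: in real fundus images the vessels are darker than the background, so the sign of the detail image (or the threshold direction) may need flipping, and the kernel sizes here are smaller than the (5,5)/(7,7)/(17,17) sequence shown above.

```python
import numpy as np
from skimage import exposure, filters, morphology

def segment_vessels(rgb):
    """Sketch of the 5-step vessel-segmentation pipeline."""
    gray = rgb.mean(axis=2) / 255.0                        # 1. RGB -> grayscale
    clahe = exposure.equalize_adapthist(gray)              # 2. CLAHE
    detail = clahe - filters.gaussian(clahe, sigma=10)     # 3. subtract blurred background
    mask = detail > filters.threshold_isodata(detail)      # 4. isodata threshold
    mask = morphology.opening(mask, np.ones((3, 3), bool))   # 5. opening, then
    mask = morphology.closing(mask, np.ones((3, 3), bool))   #    closing to clean noise
    return morphology.remove_small_objects(mask, min_size=64)

# Synthetic retina stand-in: bright 4-pixel stripes ("vessels") on a noisy background.
rng = np.random.default_rng(1)
img = np.full((128, 128, 3), 120.0)
for c in range(8, 120, 24):
    img[:, c:c + 4] += 80
img = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
mask = segment_vessels(img)
```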
