
myovision-sam


Description

This is a sub-repository of the main project myovision. Its purpose is to train and fine-tune SAM, a prompt-based image segmentation foundation model, on myotube images.

Installation

Dependencies are grouped according to your use case and can be installed as follows:

git clone git@github.com:Noza23/myovision-sam.git
cd myovision-sam

# Base installation containing only Training/Fine-Tuning
pip install .
# Additional dependencies for inference on myotube and nuclei images
pip install ".[all]"
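As a quick sanity check of the base installation, the training sub-module referenced below should be importable; note that the package name myo_sam is inferred from the sub-module paths used in this README.

```python
# Quick post-install check; the import name myo_sam is inferred from the
# sub-module paths (myo_sam.training, myo_sam.inference) used in this README.
import myo_sam.training

print("myo_sam.training imported successfully")
```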

Training / Fine-Tuning

All modules associated with Training/Fine-Tuning are located in the myo_sam.training sub-module. To start distributed Training/Fine-Tuning of the model:

  • Fill out the configuration file train.yaml.

  • Adjust the train.sh job submission script to perform training on multiple GPUs (it was used on a SLURM-managed cluster) and start the job using:

    sbatch train.sh

    or locally using torchrun with the desired flags and arguments (for example --standalone --nproc_per_node=<n_gpus>):

    torchrun train.py

    or on a single GPU:

    python3 train.py

    Note: snapshots are overwritten by default, so make a copy of the model before starting the training.

Logging and Monitoring

  • A myosam.log file created in the execution directory contains the text logs of the training process.
  • A runs directory created in the execution directory contains TensorBoard logs for monitoring the training process (view them with tensorboard --logdir runs).

Adjust to your Data

To adjust training to your own data, change the dataloader in the myo_sam.training.dataset sub-module.
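The exact interface expected by the training loop is defined in myo_sam.training.dataset, so the sketch below only illustrates the general shape of a PyTorch dataset that returns an image together with its instance masks; the class name (MyotubeDataset), the directory arguments, and the .npy mask layout are hypothetical and must be adapted to the actual classes in that sub-module.

```python
# Minimal sketch of a custom dataset, assuming a standard PyTorch Dataset
# interface. The real dataset classes live in myo_sam.training.dataset and
# may expect a different return format (e.g. prompts alongside the masks).
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class MyotubeDataset(Dataset):  # hypothetical name
    def __init__(self, images_dir: str, masks_dir: str) -> None:
        self.image_paths = sorted(Path(images_dir).glob("*.png"))
        self.masks_dir = Path(masks_dir)

    def __len__(self) -> int:
        return len(self.image_paths)

    def __getitem__(self, idx: int) -> dict:
        image_path = self.image_paths[idx]
        image = np.array(Image.open(image_path).convert("RGB"))
        # Assumed layout: one (N, H, W) stack of binary instance masks per
        # image, stored as an .npy file with the same base name.
        masks = np.load(self.masks_dir / f"{image_path.stem}.npy")
        return {
            "image": torch.from_numpy(image).permute(2, 0, 1).float(),
            "masks": torch.from_numpy(masks).float(),
        }
```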

    Model Checkpoints

Inference

All modules associated with Inference are located in the myo_sam.inference sub-module. To perform inference on myotube and nuclei images in batch mode:

  • Fill out the configuration file inference.yaml.

  • Adjust the inference.sh job submission script to perform inference on multiple GPUs (it was used on a SLURM-managed cluster) and start the job using:

    sbatch inference.sh

    or locally using torchrun with the desired flags and arguments:

    torchrun inference.py

    or on a single GPU:

    python3 inference.py
    Note: the myotube and nuclei image directories should follow this naming convention:
      - Myotube images: `x_{myotube_image_suffix}.png`
      - Nuclei images: `x_{nuclei_image_suffix}.png`
      That is, paired images must share the same base name up to the last underscore (see the sketch below).
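For illustration only, the following snippet shows how files line up under that convention; inference.py does the pairing itself, and the directory names and example file names here are hypothetical.

```python
# Illustration of the naming convention only; inference.py handles the
# pairing itself. Directory names and suffixes here are hypothetical.
from pathlib import Path

myotube_dir = Path("data/myotubes")  # e.g. sample_07_myotube.png
nuclei_dir = Path("data/nuclei")     # e.g. sample_07_nuclei.png


def base_name(path: Path) -> str:
    """Base name shared by a pair: everything before the last underscore."""
    return path.stem.rsplit("_", 1)[0]


nuclei_by_base = {base_name(p): p for p in nuclei_dir.glob("*.png")}
pairs = [
    (myotube, nuclei_by_base[base_name(myotube)])
    for myotube in sorted(myotube_dir.glob("*.png"))
    if base_name(myotube) in nuclei_by_base
]
print(f"Found {len(pairs)} myotube/nuclei pairs")
```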
