This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

ENH: Move docs folder to sphinx-docs #768

Merged
merged 26 commits into from
Aug 4, 2022
Changes from 1 commit
Commits
26 commits
3f49383
📝Move docs folder to sphinx-docs
peterhessey Jul 21, 2022
8f8513d
Trigger build for new URL
peterhessey Jul 21, 2022
3390a4a
📝 Fix build to include README + CHANGLOG
peterhessey Jul 22, 2022
6487d06
📝 Add back in link fixing
peterhessey Jul 22, 2022
da3f4de
🐛 Fix docs links
peterhessey Jul 22, 2022
a319eac
🚨 📝 Fix markdown linting
peterhessey Jul 22, 2022
2591db6
📝 Change relative links to GitHub ones permanently
peterhessey Jul 22, 2022
3d201e2
📝 Replace more relative paths
peterhessey Jul 22, 2022
9fc25a9
⚡️ 📝 Switch to symlinks
peterhessey Jul 25, 2022
e8c6852
📝 Replace README in toctree
peterhessey Jul 25, 2022
e2702b5
📝 Update README
peterhessey Jul 26, 2022
46c1966
🐛 Attempt to fix images not rendering
peterhessey Jul 26, 2022
7f2c2e6
🐛 Fix broken links
peterhessey Jul 26, 2022
8156e79
Remove IDE settings from gitignore
peterhessey Jul 27, 2022
678b616
⚡️ Move docs to `docs/` and add Makefile back
peterhessey Jul 27, 2022
33e3a06
🙈 Update gitignore
peterhessey Jul 27, 2022
27e2893
♻️ ⚡️ Resolve review comments and change theme
peterhessey Jul 28, 2022
07ee50e
📝 🔀 Rebase + markdown linting
peterhessey Aug 2, 2022
686ca06
🔥 Remove build files (again)
peterhessey Aug 2, 2022
cc54f15
🙈 Remove pieline-breaking symlink
peterhessey Aug 2, 2022
11ecd1f
➕ Add furo to sphinx dependencies
peterhessey Aug 2, 2022
5353ae3
📌 Move sphinx deps to environment.yml + lock
peterhessey Aug 3, 2022
f5d3f76
📝 Improve doc folder structure
peterhessey Aug 3, 2022
a30f609
Return to copying instead of symlink
peterhessey Aug 3, 2022
5985fda
📝 Update indexing and titles
peterhessey Aug 3, 2022
efc5f9e
📝 Address review comments
peterhessey Aug 3, 2022
📝 Update indexing and titles
peterhessey committed Aug 3, 2022
commit 5985fdacb8394a77be95e9dc50c91a0174642549
2 changes: 2 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -86,6 +86,8 @@ instance/
# Sphinx documentation
docs/build/
docs/source/md/CHANGELOG.md
docs/source/md/README.md
docs/source/md/LICENSE

# PyBuilder
target/
18 changes: 15 additions & 3 deletions README.md
@@ -2,11 +2,11 @@

[![Build Status](https://innereye.visualstudio.com/InnerEye/_apis/build/status/InnerEye-DeepLearning/InnerEye-DeepLearning-PR?branchName=main)](https://innereye.visualstudio.com/InnerEye/_build?definitionId=112&branchName=main)

InnerEye-DeepLearning (IE-DL) is a toolbox for easily training deep learning models on 3D medical images. Simple to run both locally and in the cloud with AzureML, it allows users to train and run inference on the following:
InnerEye-DeepLearning (IE-DL) is a toolbox for easily training deep learning models on 3D medical images. Simple to run both locally and in the cloud with [AzureML](https://docs.microsoft.com/en-gb/azure/machine-learning/), it allows users to train and run inference on the following:

- Segmentation models.
- Classification and regression models.
- Any PyTorch Lightning model, via a [bring-your-own-model setup](https://innereye-deeplearning.readthedocs.io/docs/bring_your_own_model.html).
- Any PyTorch Lightning model, via a [bring-your-own-model setup](docs/source/md/bring_your_own_model.md).

In addition, this toolbox supports:

@@ -21,7 +21,7 @@ For all documentation, including setup guides and APIs, please refer to the [IE-

## Quick Setup

This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](https://innereye-deeplearning.readthedocs.io/docs/environment.html) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.
This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/source/md/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.

1. Clone the InnerEye-DeepLearning repo by running the following command:

@@ -60,6 +60,18 @@ Details can be found [here](docs/source/md/deploy_on_aml.md).

![docs/deployment.png](docs/source/images/deployment.png)

## Benefits of InnerEye-DeepLearning

In combination with the power of AzureML, InnerEye provides the following benefits:

- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of the code. Tags are added to the experiments automatically; these can later help filter and find old experiments.
- **Transparency**: All team members have access to each other's experiments and results.
- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All sources of randomness are controlled for.
- **Cost reduction**: Using AzureML, all compute resources (virtual machines, VMs) are requested at the time of starting the training job and freed up at the end. Idle VMs will not incur costs. Azure low priority nodes can be used to further reduce costs (up to 80% cheaper).
- **Scalability**: Large numbers of VMs can be requested easily to cope with a burst in jobs.

Despite the cloud focus, InnerEye is designed to run locally too, which is important for model prototyping, debugging, and cases where the cloud can't be used. Therefore, if you already have GPU machines available, you will be able to utilize them with the InnerEye toolbox.
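The reproducibility claim above (identical metrics from identical code and data) depends on seeding every source of randomness before training. A minimal stdlib-only sketch of the idea, using hypothetical helper names rather than InnerEye's actual implementation:

```python
import random

def seed_everything(seed: int) -> None:
    """Fix every source of randomness so repeated runs give identical results."""
    random.seed(seed)
    # A real training setup would also seed numpy / torch / CUDA here, e.g.
    # np.random.seed(seed); torch.manual_seed(seed)

def simulated_training_run(seed: int) -> list:
    """Stand-in for a training loop: returns three pseudo-random 'metrics'."""
    seed_everything(seed)
    return [round(random.random(), 6) for _ in range(3)]

# Two runs with the same seed produce exactly the same metrics.
assert simulated_training_run(42) == simulated_training_run(42)
```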

## Licensing

[MIT License](/LICENSE)
9 changes: 5 additions & 4 deletions docs/source/conf.py
@@ -106,7 +106,8 @@ def replace_in_file(filepath: Path, original_str: str, replace_str: str) -> None
 files_to_copy = ["CHANGELOG.md", "README.md"]
 for file_to_copy in files_to_copy:
     copy_path = docs_path / file_to_copy
-    if not copy_path.exists():
-        source_path = repository_root / file_to_copy
-        shutil.copy(source_path, copy_path)
-        replace_in_file(copy_path, "docs/source/md/", "")
+    source_path = repository_root / file_to_copy
+    shutil.copy(source_path, copy_path)
+    replace_in_file(copy_path, "docs/source/md/", "")
+    replace_in_file(copy_path, "/LICENSE", "https://github.com/microsoft/InnerEye-DeepLearning/blob/main/LICENSE")
+    replace_in_file(copy_path, "docs/source/images/", "../images/")
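The conf.py hunk above relies on a `replace_in_file` helper; its signature appears in the hunk header. A minimal implementation consistent with that signature might look like the following sketch (the repository's actual helper may differ in details):

```python
from pathlib import Path

def replace_in_file(filepath: Path, original_str: str, replace_str: str) -> None:
    """Replace every occurrence of original_str with replace_str in the file."""
    text = filepath.read_text()
    filepath.write_text(text.replace(original_str, replace_str))
```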
16 changes: 8 additions & 8 deletions docs/source/index.rst
@@ -8,16 +8,21 @@ InnerEye-DeepLearning Documentation

.. toctree::
:maxdepth: 1
:caption: TESTING THIS CAPTION HERE
:caption: Overview and user guides

md/innereye_deeplearning.md
md/WSL.md
md/README.md
md/environment.md
md/WSL.md
md/hello_world_model.md
md/setting_up_aml.md
md/creating_dataset.md
md/building_models.md
md/sample_tasks.md
md/bring_your_own_model.md
md/debugging_and_monitoring.md
md/model_diagnostics.md
md/move_model.md
md/hippocampus_model.md

.. toctree::
:maxdepth: 1
@@ -26,14 +31,9 @@ InnerEye-DeepLearning Documentation
md/pull_requests.md
md/testing.md
md/contributing.md

md/hello_world_model.md
md/deploy_on_aml.md
md/bring_your_own_model.md
md/fastmri.md
md/innereye_as_submodule.md
md/model_diagnostics.md
md/move_model.md
md/releases.md
md/self_supervised_models.md
md/CHANGELOG.md
2 changes: 1 addition & 1 deletion docs/source/md/WSL.md
@@ -1,4 +1,4 @@
-# How to use the Windows Subsystem for Linux (WSL2) for development
+# Windows Subsystem for Linux (WSL2)

We are aware of two issues with running our toolbox on Windows:

2 changes: 1 addition & 1 deletion docs/source/md/debugging_and_monitoring.md
@@ -1,4 +1,4 @@
-# Debugging and Monitoring Jobs
+# Debugging and Monitoring

## Using TensorBoard to monitor AzureML jobs

2 changes: 1 addition & 1 deletion docs/source/md/environment.md
@@ -1,4 +1,4 @@
-# Set up InnerEye-DeepLearning
+# Setup

## Operating System

22 changes: 11 additions & 11 deletions docs/source/md/hippocampus_model.md
@@ -12,19 +12,19 @@ Please note that this model is intended for research purposes only. You are resp

## Usage

-The following instructions assume you have completed the preceding setup steps in the [InnerEye README](https://github.com/microsoft/InnerEye-DeepLearning/), in particular, [Setting up Azure Machine Learning](https://github.com/microsoft/InnerEye-DeepLearning/blob/main/docs/setting_up_aml.md).
+The following instructions assume you have completed the preceding setup steps in the [InnerEye README](https://github.com/microsoft/InnerEye-DeepLearning/), in particular, [Setting up Azure Machine Learning](setting_up_aml.md).

### Create an Azure ML Dataset

-To evaluate this model on your own data, you will first need to register an [Azure ML Dataset](https://docs.microsoft.com/en-us/azure/machine-learning/v1/how-to-create-register-datasets). You can follow the instructions in the InnerEye repo for [creating datasets](https://github.com/microsoft/InnerEye-DeepLearning/blob/main/docs/creating_dataset.md) in order to do this.
+To evaluate this model on your own data, you will first need to register an [Azure ML Dataset](https://docs.microsoft.com/en-us/azure/machine-learning/v1/how-to-create-register-datasets). You can follow the instructions for [creating datasets](creating_dataset.md) in order to do this.

## Downloading the model

The saved weights from the trained Hippocampus model can be downloaded along with the source code used to train it from [our GitHub releases page](https://github.com/microsoft/InnerEye-DeepLearning/releases/tag/v0.5).

### Registering a model in Azure ML

-To evaluate the model in Azure ML, you must first [register an Azure ML Model](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#remarks). To register the Hippocampus model in your AML Workspace, unpack the source code downloaded in the previous step and follow InnerEye's [instructions to upload models to Azure ML](https://github.com/microsoft/InnerEye-DeepLearning/blob/main/docs/move_model.md).
+To evaluate the model in Azure ML, you must first [register an Azure ML Model](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#remarks). To register the Hippocampus model in your AML Workspace, unpack the source code downloaded in the previous step and follow InnerEye's [instructions to upload models to Azure ML](move_model.md).

Run the following from a folder that contains both the `ENVIRONMENT/` and `MODEL/` folders (these exist inside the downloaded model files):

@@ -44,7 +44,7 @@ python InnerEye/Scripts/move_model.py \

### Evaluating the model

-You can evaluate the model either in Azure ML or locally using the downloaded checkpoint files. These 2 scenarios are described in more detail, along with instructions in [testing an existing model](https://github.com/microsoft/InnerEye-DeepLearning/blob/main/docs/building_models.md#testing-an-existing-model).
+You can evaluate the model either in Azure ML or locally using the downloaded checkpoint files. These 2 scenarios are described in more detail, along with instructions in [testing an existing model](building_models.md#testing-an-existing-model).

For example, to evaluate the model on your Dataset in Azure ML, run the following from within the directory `*/MODEL/final_ensemble_model/`:

@@ -73,9 +73,9 @@ To deploy this model, see the instructions in the [InnerEye README](https://gith

---

-# Hippocampal Segmentation Model Card
+## Hippocampal Segmentation Model Card

-## Model details
+### Model details

- Organisation: Biomedical Imaging Team at Microsoft Research, Cambridge UK.
- Model date: 5th July 2022.
@@ -85,27 +85,27 @@ To deploy this model, see the instructions in the [InnerEye README](https://gith
- License: The model is released under MIT license as described [here](https://github.com/microsoft/InnerEye-DeepLearning/blob/main/LICENSE).
- Contact: [email protected].

-## Limitations
+### Limitations

This model has been trained on a subset of the ADNI dataset. There have been various phases of ADNI spanning different time periods; in this Model Card we refer to the original, or ADNI 1, study. This dataset comprises scans and metadata from patients between the ages of 55-90 from 57 different sites across the US and Canada [source](https://adni.loni.usc.edu/study-design/#background-container). A major limitation of this model is therefore its ability to generalise to patients outside of this demographic. Another limitation is that the MRI protocol for ADNI 1 (which was collected between 2004 and 2009) focused on imaging on 1.5T scanners [source](https://adni.loni.usc.edu/methods/mri-tool/mri-analysis/). Modern scanners may have higher field strengths, and therefore different levels of contrast, which could lead to performance that differs from the results we report.

The results of this model have not been validated by clinical experts. We expect the user to evaluate the results.

-## Intended Uses
+### Intended Uses

This model is for research purposes only. It is intended to be used for the task of segmenting hippocampi from brain MRI scans. Any other task is out of scope for this model.

-## About the data
+### About the data

The model was trained on 998 pairs of MRI + segmentation. The model was further validated on 127 pairs of images and tested on 125 pairs. A further 317 pairs were retained as a held-out test set for the final evaluation of the model, which is what we report performance on.

All of this data comes from the Alzheimer's Disease Neuroimaging Initiative study [link to website](https://adni.loni.usc.edu/). The data is publicly available, but requires signing a Data Use Agreement before access is granted.

-## About the ground-truth segmentations
+### About the ground-truth segmentations

The segmentations were also downloaded from the ADNI dataset. They were created semi-automatically using software from [Medtronic Surgical Navigation Technologies](https://www.medtronic.com/us-en/healthcare-professionals/products/neurological/surgical-navigation-systems.html). Further information is available on the [ADNI website](https://adni.loni.usc.edu/).

-## Metrics
+### Metrics

Note that due to the ADNI Data Usage Agreement we are only able to share aggregate-level metrics from our evaluation. Evaluation is performed on a held-out test set of 252 MRI + segmentation pairs from the ADNI dataset.
