This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

ENH: Move docs to ReadTheDocs (#768)
* 📝 Move docs folder to sphinx-docs

* Trigger build for new URL

* 📝 Fix build to include README + CHANGELOG

* 📝 Add back in link fixing

* 🐛 Fix docs links

* 🚨 📝 Fix markdown linting

* 📝 Change relative links to GitHub ones permanently

* 📝 Replace more relative paths

* ⚡️ 📝 Switch to symlinks

* 📝 Replace README in toctree

* 📝 Update README

* 🐛 Attempt to fix images not rendering

* 🐛 Fix broken links

* Remove IDE settings from gitignore

* ⚡️ Move docs to `docs/` and add Makefile back

* 🙈 Update gitignore

* ♻️ ⚡️ Resolve review comments and change theme

* 📝 🔀 Rebase + markdown linting

* 🔥 Remove build files (again)

* 🙈 Remove pipeline-breaking symlink

* ➕ Add furo to sphinx dependencies

* 📌 Move sphinx deps to environment.yml + lock

* 📝 Improve doc folder structure

* Return to copying instead of symlink

* 📝 Update indexing and titles

* 📝 Address review comments
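Several of the bullets above ("Add back in link fixing", "Replace more relative paths", "Change relative links to GitHub ones permanently") describe rewriting Markdown links so they still resolve after the docs move. A toy sketch of that kind of rewrite — the `fix_links` helper, regex, and target URL are assumptions for illustration, not the commit's actual tooling:

```python
# Toy illustration of link fixing: turn relative repo links like
# (docs/foo.md) into absolute GitHub URLs so they resolve from pages
# hosted outside the repository. Pattern and mapping are assumptions.
import re

GITHUB_BLOB = "https://github.com/microsoft/InnerEye-DeepLearning/blob/main"

def fix_links(markdown: str) -> str:
    """Rewrite relative Markdown link targets to absolute GitHub URLs."""
    # Negative lookahead skips targets that are already absolute URLs.
    return re.sub(
        r"\]\((?!https?://)([^)]+)\)",
        lambda m: f"]({GITHUB_BLOB}/{m.group(1)})",
        markdown,
    )

print(fix_links("See [the docs](docs/environment.md) for details."))
# -> See [the docs](https://github.com/microsoft/InnerEye-DeepLearning/blob/main/docs/environment.md) for details.
```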
peterhessey committed Aug 4, 2022
1 parent 4e12cec commit c1b363e
Showing 49 changed files with 437 additions and 392 deletions.
6 changes: 4 additions & 2 deletions .gitignore
@@ -84,8 +84,10 @@ instance/
.scrapy

# Sphinx documentation
-sphinx-docs/build/
-sphinx-docs/source/md/
+docs/build/
+docs/source/md/CHANGELOG.md
+docs/source/md/README.md
+docs/source/md/LICENSE

# PyBuilder
target/
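The three new `docs/source/md/` entries are ignored because the Sphinx build copies `README.md`, `CHANGELOG.md`, and `LICENSE` from the repository root into the doc source tree at build time (see the commit bullets "Fix build to include README + CHANGELOG" and "Return to copying instead of symlink"). A minimal sketch of how `docs/source/conf.py` could perform that copy, assuming this layout — illustrative only, not the code from this commit:

```python
# Hypothetical excerpt from docs/source/conf.py: copy root-level Markdown
# files into docs/source/md/ so Sphinx can include them in the toctree.
# The copies are generated at build time, hence the .gitignore entries.
import shutil
from pathlib import Path

repo_root = Path(__file__).resolve().parents[2]  # docs/source/ -> repo root
md_dir = Path(__file__).resolve().parent / "md"
md_dir.mkdir(exist_ok=True)

for name in ("README.md", "CHANGELOG.md", "LICENSE"):
    shutil.copy(repo_root / name, md_dir / name)
```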
6 changes: 1 addition & 5 deletions .readthedocs.yaml
@@ -9,11 +9,7 @@ build:
  python: miniconda3-4.7

sphinx:
-  configuration: sphinx-docs/source/conf.py
-
-python:
-  install:
-    - requirements: sphinx-docs/requirements.txt
+  configuration: docs/source/conf.py

conda:
  environment: environment.yml
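Dropping the `python: install: requirements` section means Read the Docs now gets Sphinx and its theme from the conda environment referenced under `conda:` (the commit bullets mention "Move sphinx deps to environment.yml + lock" and "Add furo to sphinx dependencies"). A sketch of what the relevant part of `environment.yml` might look like — the package list and structure below are assumptions, not the committed file:

```yaml
# Hypothetical excerpt from environment.yml: the docs dependencies that
# previously lived in sphinx-docs/requirements.txt, now resolved via conda
# because .readthedocs.yaml points at this file.
name: InnerEye
dependencies:
  - pip
  - pip:
      - sphinx        # builds the documentation on Read the Docs
      - furo          # the theme this PR switches to
      - myst-parser   # assumed: renders the copied Markdown pages
```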
4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -181,7 +181,7 @@ institution id and series id columns are missing.
- ([#441](https://github.com/microsoft/InnerEye-DeepLearning/pull/441)) Add script to move models from one AzureML workspace to another: `python InnerEye/Scripts/move_model.py`
- ([#417](https://github.com/microsoft/InnerEye-DeepLearning/pull/417)) Added a generic way of adding PyTorch Lightning
models to the toolbox. It is now possible to train almost any Lightning model with the InnerEye toolbox in AzureML,
-with only minimum code changes required. See [the MD documentation](docs/bring_your_own_model.md) for details.
+with only minimum code changes required. See [the MD documentation](docs/source/md/bring_your_own_model.md) for details.
- ([#430](https://github.com/microsoft/InnerEye-DeepLearning/pull/430)) Update conversion to 1.0.1 InnerEye-DICOM-RT to
add: manufacturer, SoftwareVersions, Interpreter and ROIInterpretedTypes.
- ([#385](https://github.com/microsoft/InnerEye-DeepLearning/pull/385)) Add the ability to train a model on multiple
@@ -354,7 +354,7 @@ console for easier diagnostics.

#### Fixed

-- When registering a model, it now has a consistent folder structured, described [here](docs/deploy_on_aml.md). This
+- When registering a model, it now has a consistent folder structure, described [here](docs/source/md/deploy_on_aml.md). This
folder structure is present irrespective of using InnerEye as a submodule or not. In particular, exactly 1 Conda
environment will be contained in the model.

94 changes: 26 additions & 68 deletions README.md
@@ -2,53 +2,26 @@

[![Build Status](https://innereye.visualstudio.com/InnerEye/_apis/build/status/InnerEye-DeepLearning/InnerEye-DeepLearning-PR?branchName=main)](https://innereye.visualstudio.com/InnerEye/_build?definitionId=112&branchName=main)

## Overview
-This is a deep learning toolbox to train models on medical images (or more generally, 3D images).
-It integrates seamlessly with cloud computing in Azure.
+InnerEye-DeepLearning (IE-DL) is a toolbox for easily training deep learning models on 3D medical images. Simple to run both locally and in the cloud with [AzureML](https://docs.microsoft.com/en-gb/azure/machine-learning/), it allows users to train and run inference on the following:
+
+- Segmentation models.
+- Classification and regression models.
+- Any PyTorch Lightning model, via a [bring-your-own-model setup](docs/source/md/bring_your_own_model.md).

-On the modelling side, this toolbox supports
+In addition, this toolbox supports:

-- Segmentation models
-- Classification and regression models
-- Adding cloud support to any PyTorch Lightning model, via a [bring-your-own-model setup](docs/bring_your_own_model.md)
-
-On the user side, this toolbox focusses on enabling machine learning teams to achieve more. It is cloud-first, and
-relies on [Azure Machine Learning Services (AzureML)](https://docs.microsoft.com/en-gb/azure/machine-learning/) for execution,
-bookkeeping, and visualization. Taken together, this gives:
-
-- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of
-the code. Tags are added to the experiments automatically, that can later help filter and find old experiments.
-- **Transparency**: All team members have access to each other's experiments and results.
-- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All
-sources of randomness like multithreading are controlled for.
-- **Cost reduction**: Using AzureML, all compute (virtual machines, VMs) is requested at the time of starting the
-training job, and freed up at the end. Idle VMs will not incur costs. In addition, Azure low priority
-nodes can be used to further reduce costs (up to 80% cheaper).
-- **Scale out**: Large numbers of VMs can be requested easily to cope with a burst in jobs.
-
-Despite the cloud focus, all training and model testing works just as well on local compute, which is important for
-model prototyping, debugging, and in cases where the cloud can't be used. In particular, if you already have GPU
-machines available, you will be able to utilize them with the InnerEye toolbox.
-
-In addition, our toolbox supports:
-
-- Cross-validation using AzureML's built-in support, where the models for
-individual folds are trained in parallel. This is particularly important for the long-running training jobs
-often seen with medical images.
-- Hyperparameter tuning using
-[Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
+- Cross-validation using AzureML, where the models for individual folds are trained in parallel. This is particularly important for the long-running training jobs often seen with medical images.
+- Hyperparameter tuning using [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
- Building ensemble models.
-- Easy creation of new models via a configuration-based approach, and inheritance from an existing
-architecture.
+- Easy creation of new models via a configuration-based approach, and inheritance from an existing architecture.

-Once training in AzureML is done, the models can be deployed from within AzureML.
+## Documentation

-## Quick Setup
+For all documentation, including setup guides and APIs, please refer to the [IE-DL Read the Docs site](https://innereye-deeplearning.readthedocs.io/#).

-This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.
+## Quick Setup

-### Instructions
+This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/source/md/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.

1. Clone the InnerEye-DeepLearning repo by running the following command:
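The command itself sits in the collapsed part of this diff. A plausible form — the URL is taken from the repository links above, and the `--recursive` flag and Git LFS steps are assumptions based on the prerequisites just mentioned — is `git clone --recursive https://github.com/microsoft/InnerEye-DeepLearning`, followed by `git lfs install && git lfs pull` inside the cloned folder.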

@@ -73,46 +46,31 @@
If the above runs with no errors: Congratulations! You have successfully built your first model.
If it fails, please check the
[troubleshooting page on the Wiki](https://github.com/microsoft/InnerEye-DeepLearning/wiki/Issues-with-code-setup-and-the-HelloWorld-model).

-## Other Documentation
-
-Further detailed instructions, including setup in Azure, are here:
-
-1. [Setting up your environment](docs/environment.md)
-1. [Setting up Azure Machine Learning](docs/setting_up_aml.md)
-1. [Training a simple segmentation model in Azure ML](docs/hello_world_model.md)
-1. [Creating a dataset](docs/creating_dataset.md)
-1. [Building models in Azure ML](docs/building_models.md)
-1. [Sample Segmentation and Classification tasks](docs/sample_tasks.md)
-1. [Debugging and monitoring models](docs/debugging_and_monitoring.md)
-1. [Model diagnostics](docs/model_diagnostics.md)
-1. [Move a model to a different workspace](docs/move_model.md)
-1. [Working with FastMRI models](docs/fastmri.md)
-1. [Active label cleaning and noise robust learning toolbox](https://github.com/microsoft/InnerEye-DeepLearning/blob/1606729c7a16e1bfeb269694314212b6e2737939/InnerEye-DataQuality/README.md)
-1. [Using InnerEye as a git submodule](docs/innereye_as_submodule.md)
-1. [Evaluating pre-trained models](docs/hippocampus_model.md)
-
-## Deployment
+## Full InnerEye Deployment

We offer a companion set of open-sourced tools that help to integrate trained CT segmentation models with clinical
software systems:

- The [InnerEye-Gateway](https://github.com/microsoft/InnerEye-Gateway) is a Windows service running in a DICOM network,
that can route anonymized DICOM images to an inference service.
- The [InnerEye-Inference](https://github.com/microsoft/InnerEye-Inference) component offers a REST API that integrates
-with the InnnEye-Gateway, to run inference on InnerEye-DeepLearning models.
+with the InnerEye-Gateway, to run inference on InnerEye-DeepLearning models.

-Details can be found [here](docs/deploy_on_aml.md).
+Details can be found [here](docs/source/md/deploy_on_aml.md).

-![docs/deployment.png](docs/deployment.png)
+![docs/deployment.png](docs/source/images/deployment.png)

-## More information
+## Benefits of InnerEye-DeepLearning

-1. [Project InnerEye](https://www.microsoft.com/en-us/research/project/medical-image-analysis/)
-1. [Releases](docs/releases.md)
-1. [Changelog](CHANGELOG.md)
-1. [Testing](docs/testing.md)
-1. [How to do pull requests](docs/pull_requests.md)
-1. [Contributing](docs/contributing.md)
+In combination with the power of AzureML, InnerEye provides the following benefits:
+
+- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of the code. Tags are added to the experiments automatically, which can later help filter and find old experiments.
+- **Transparency**: All team members have access to each other's experiments and results.
+- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All sources of randomness are controlled for.
+- **Cost reduction**: Using AzureML, all compute resources (virtual machines, VMs) are requested at the time of starting the training job and freed up at the end. Idle VMs will not incur costs. Azure low priority nodes can be used to further reduce costs (up to 80% cheaper).
+- **Scalability**: Large numbers of VMs can be requested easily to cope with a burst in jobs.
+
+Despite the cloud focus, InnerEye is designed to be able to run locally too, which is important for model prototyping, debugging, and in cases where the cloud can't be used. Therefore, if you already have GPU machines available, you will be able to utilize them with the InnerEye toolbox.

## Licensing
