This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

ENH: Move docs folder to sphinx-docs #768

Merged 26 commits on Aug 4, 2022
Commits
3f49383
📝 Move docs folder to sphinx-docs
peterhessey Jul 21, 2022
8f8513d
Trigger build for new URL
peterhessey Jul 21, 2022
3390a4a
📝 Fix build to include README + CHANGELOG
peterhessey Jul 22, 2022
6487d06
📝 Add back in link fixing
peterhessey Jul 22, 2022
da3f4de
🐛 Fix docs links
peterhessey Jul 22, 2022
a319eac
🚨 📝 Fix markdown linting
peterhessey Jul 22, 2022
2591db6
📝 Change relative links to GitHub ones permanently
peterhessey Jul 22, 2022
3d201e2
📝 Replace more relative paths
peterhessey Jul 22, 2022
9fc25a9
⚡️ 📝 Switch to symlinks
peterhessey Jul 25, 2022
e8c6852
📝 Replace README in toctree
peterhessey Jul 25, 2022
e2702b5
📝 Update README
peterhessey Jul 26, 2022
46c1966
🐛 Attempt to fix images not rendering
peterhessey Jul 26, 2022
7f2c2e6
🐛 Fix broken links
peterhessey Jul 26, 2022
8156e79
Remove IDE settings from gitignore
peterhessey Jul 27, 2022
678b616
⚡️ Move docs to `docs/` and add Makefile back
peterhessey Jul 27, 2022
33e3a06
🙈 Update gitignore
peterhessey Jul 27, 2022
27e2893
♻️ ⚡️ Resolve review comments and change theme
peterhessey Jul 28, 2022
07ee50e
📝 🔀 Rebase + markdown linting
peterhessey Aug 2, 2022
686ca06
🔥 Remove build files (again)
peterhessey Aug 2, 2022
cc54f15
🙈 Remove pipeline-breaking symlink
peterhessey Aug 2, 2022
11ecd1f
➕ Add furo to sphinx dependencies
peterhessey Aug 2, 2022
5353ae3
📌 Move sphinx deps to environment.yml + lock
peterhessey Aug 3, 2022
f5d3f76
📝 Improve doc folder structure
peterhessey Aug 3, 2022
a30f609
Return to copying instead of symlink
peterhessey Aug 3, 2022
5985fda
📝 Update indexing and titles
peterhessey Aug 3, 2022
efc5f9e
📝 Address review comments
peterhessey Aug 3, 2022
6 changes: 4 additions & 2 deletions .gitignore
@@ -84,8 +84,10 @@ instance/
.scrapy

# Sphinx documentation
-sphinx-docs/build/
-sphinx-docs/source/md/
+docs/build/
+docs/source/md/CHANGELOG.md
+docs/source/md/README.md
+docs/source/md/LICENSE

# PyBuilder
target/
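The newly ignored paths under docs/source/md/ line up with the later commit "Return to copying instead of symlink": the top-level README, CHANGELOG and LICENSE are evidently copied into the Sphinx source tree at build time rather than symlinked. A minimal sketch of such a copy step, assuming it lives in docs/source/conf.py and targets exactly these three files:

```python
# Hypothetical copy step for docs/source/conf.py. The file names and target folder
# are assumptions inferred from the .gitignore entries above, not the PR's exact code.
import shutil
from pathlib import Path

repo_root = Path(__file__).resolve().parents[2]   # docs/source/ -> docs/ -> repo root
md_dir = Path(__file__).resolve().parent / "md"   # docs/source/md/, ignored by git
md_dir.mkdir(exist_ok=True)

for name in ("README.md", "CHANGELOG.md", "LICENSE"):
    shutil.copy(repo_root / name, md_dir / name)  # refresh the copies on every build
```

Plain copies rather than symlinks also appear to sidestep the pipeline issue noted in the "Remove pipeline-breaking symlink" commit.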
6 changes: 1 addition & 5 deletions .readthedocs.yaml
@@ -9,11 +9,7 @@ build:
python: miniconda3-4.7

sphinx:
-  configuration: sphinx-docs/source/conf.py
-
-python:
-  install:
-    - requirements: sphinx-docs/requirements.txt
+  configuration: docs/source/conf.py

conda:
environment: environment.yml
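With the Sphinx configuration now read from docs/source/conf.py and the documentation dependencies coming from environment.yml instead of a separate pip requirements file, a local build should mirror what Read the Docs runs. A minimal sketch, assuming the docs/source to docs/build/html layout implied by the .gitignore change above:

```python
# Hypothetical local docs build, roughly equivalent to running `make html` inside docs/.
# Source and output paths are assumptions based on the paths referenced in this PR.
from sphinx.cmd.build import build_main

exit_code = build_main(["-b", "html", "docs/source", "docs/build/html"])
raise SystemExit(exit_code)
```

The commit "Move docs to `docs/` and add Makefile back" suggests `make html` inside docs/ is the intended entry point; the snippet above is just the programmatic equivalent.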
4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -181,7 +181,7 @@ institution id and series id columns are missing.
- ([#441](https://github.com/microsoft/InnerEye-DeepLearning/pull/441)) Add script to move models from one AzureML workspace to another: `python InnerEye/Scripts/move_model.py`
- ([#417](https://github.com/microsoft/InnerEye-DeepLearning/pull/417)) Added a generic way of adding PyTorch Lightning
models to the toolbox. It is now possible to train almost any Lightning model with the InnerEye toolbox in AzureML,
-with only minimum code changes required. See [the MD documentation](docs/bring_your_own_model.md) for details.
+with only minimum code changes required. See [the MD documentation](docs/source/md/bring_your_own_model.md) for details.
- ([#430](https://github.com/microsoft/InnerEye-DeepLearning/pull/430)) Update conversion to 1.0.1 InnerEye-DICOM-RT to
add: manufacturer, SoftwareVersions, Interpreter and ROIInterpretedTypes.
- ([#385](https://github.com/microsoft/InnerEye-DeepLearning/pull/385)) Add the ability to train a model on multiple
@@ -354,7 +354,7 @@ console for easier diagnostics.

#### Fixed

-- When registering a model, it now has a consistent folder structured, described [here](docs/deploy_on_aml.md). This
+- When registering a model, it now has a consistent folder structured, described [here](docs/source/md/deploy_on_aml.md). This
folder structure is present irrespective of using InnerEye as a submodule or not. In particular, exactly 1 Conda
environment will be contained in the model.

94 changes: 26 additions & 68 deletions README.md
@@ -2,53 +2,26 @@

[![Build Status](https://innereye.visualstudio.com/InnerEye/_apis/build/status/InnerEye-DeepLearning/InnerEye-DeepLearning-PR?branchName=main)](https://innereye.visualstudio.com/InnerEye/_build?definitionId=112&branchName=main)

## Overview
InnerEye-DeepLearning (IE-DL) is a toolbox for easily training deep learning models on 3D medical images. Simple to run both locally and in the cloud with [AzureML](https://docs.microsoft.com/en-gb/azure/machine-learning/), it allows users to train and run inference on the following:

This is a deep learning toolbox to train models on medical images (or more generally, 3D images).
It integrates seamlessly with cloud computing in Azure.
- Segmentation models.
- Classification and regression models.
- Any PyTorch Lightning model, via a [bring-your-own-model setup](docs/source/md/bring_your_own_model.md).

On the modelling side, this toolbox supports
In addition, this toolbox supports:

- Segmentation models
- Classification and regression models
- Adding cloud support to any PyTorch Lightning model, via a [bring-your-own-model setup](docs/bring_your_own_model.md)

On the user side, this toolbox focusses on enabling machine learning teams to achieve more. It is cloud-first, and
relies on [Azure Machine Learning Services (AzureML)](https://docs.microsoft.com/en-gb/azure/machine-learning/) for execution,
bookkeeping, and visualization. Taken together, this gives:

- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of
the code. Tags are added to the experiments automatically, that can later help filter and find old experiments.
- **Transparency**: All team members have access to each other's experiments and results.
- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All
sources of randomness like multithreading are controlled for.
- **Cost reduction**: Using AzureML, all compute (virtual machines, VMs) is requested at the time of starting the
training job, and freed up at the end. Idle VMs will not incur costs. In addition, Azure low priority
nodes can be used to further reduce costs (up to 80% cheaper).
- **Scale out**: Large numbers of VMs can be requested easily to cope with a burst in jobs.

Despite the cloud focus, all training and model testing works just as well on local compute, which is important for
model prototyping, debugging, and in cases where the cloud can't be used. In particular, if you already have GPU
machines available, you will be able to utilize them with the InnerEye toolbox.

In addition, our toolbox supports:

- Cross-validation using AzureML's built-in support, where the models for
individual folds are trained in parallel. This is particularly important for the long-running training jobs
often seen with medical images.
- Hyperparameter tuning using
[Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
- Cross-validation using AzureML, where the models for individual folds are trained in parallel. This is particularly important for the long-running training jobs often seen with medical images.
- Hyperparameter tuning using [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
- Building ensemble models.
- Easy creation of new models via a configuration-based approach, and inheritance from an existing
architecture.
- Easy creation of new models via a configuration-based approach, and inheritance from an existing architecture.

Once training in AzureML is done, the models can be deployed from within AzureML.
## Documentation

## Quick Setup
For all documentation, including setup guides and APIs, please refer to the [IE-DL Read the Docs site](https://innereye-deeplearning.readthedocs.io/#).

This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.
## Quick Setup

### Instructions
This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/source/md/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.

1. Clone the InnerEye-DeepLearning repo by running the following command:

@@ -73,46 +73,31 @@ If the above runs with no errors: Congratulations! You have successfully built y
If it fails, please check the
[troubleshooting page on the Wiki](https://github.com/microsoft/InnerEye-DeepLearning/wiki/Issues-with-code-setup-and-the-HelloWorld-model).

## Other Documentation

Further detailed instructions, including setup in Azure, are here:

1. [Setting up your environment](docs/environment.md)
1. [Setting up Azure Machine Learning](docs/setting_up_aml.md)
1. [Training a simple segmentation model in Azure ML](docs/hello_world_model.md)
1. [Creating a dataset](docs/creating_dataset.md)
1. [Building models in Azure ML](docs/building_models.md)
1. [Sample Segmentation and Classification tasks](docs/sample_tasks.md)
1. [Debugging and monitoring models](docs/debugging_and_monitoring.md)
1. [Model diagnostics](docs/model_diagnostics.md)
1. [Move a model to a different workspace](docs/move_model.md)
1. [Working with FastMRI models](docs/fastmri.md)
1. [Active label cleaning and noise robust learning toolbox](https://github.com/microsoft/InnerEye-DeepLearning/blob/1606729c7a16e1bfeb269694314212b6e2737939/InnerEye-DataQuality/README.md)
1. [Using InnerEye as a git submodule](docs/innereye_as_submodule.md)
1. [Evaluating pre-trained models](docs/hippocampus_model.md)

## Deployment
## Full InnerEye Deployment

We offer a companion set of open-sourced tools that help to integrate trained CT segmentation models with clinical
software systems:

- The [InnerEye-Gateway](https://github.com/microsoft/InnerEye-Gateway) is a Windows service running in a DICOM network,
that can route anonymized DICOM images to an inference service.
- The [InnerEye-Inference](https://github.com/microsoft/InnerEye-Inference) component offers a REST API that integrates
with the InnnEye-Gateway, to run inference on InnerEye-DeepLearning models.
with the InnerEye-Gateway, to run inference on InnerEye-DeepLearning models.

Details can be found [here](docs/source/md/deploy_on_aml.md).

Details can be found [here](docs/deploy_on_aml.md).
![docs/deployment.png](docs/source/images/deployment.png)

![docs/deployment.png](docs/deployment.png)
## Benefits of InnerEye-DeepLearning

## More information
In combination with the power of AzureML, InnerEye provides the following benefits:

- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of the code. Tags are added to the experiments automatically, that can later help filter and find old experiments.
- **Transparency**: All team members have access to each other's experiments and results.
- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All sources of randomness are controlled for.
- **Cost reduction**: Using AzureML, all compute resources (virtual machines, VMs) are requested at the time of starting the training job and freed up at the end. Idle VMs will not incur costs. Azure low priority nodes can be used to further reduce costs (up to 80% cheaper).
- **Scalability**: Large numbers of VMs can be requested easily to cope with a burst in jobs.

1. [Project InnerEye](https://www.microsoft.com/en-us/research/project/medical-image-analysis/)
1. [Releases](docs/releases.md)
1. [Changelog](CHANGELOG.md)
1. [Testing](docs/testing.md)
1. [How to do pull requests](docs/pull_requests.md)
1. [Contributing](docs/contributing.md)
Despite the cloud focus, InnerEye is designed to be able to run locally too, which is important for model prototyping, debugging, and in cases where the cloud can't be used. Therefore, if you already have GPU machines available, you will be able to utilize them with the InnerEye toolbox.

## Licensing
