This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

ENH: Move docs folder to sphinx-docs #768

Merged
merged 26 commits on Aug 4, 2022
Changes from 21 commits
Commits
26 commits
3f49383
📝Move docs folder to sphinx-docs
peterhessey Jul 21, 2022
8f8513d
Trigger build for new URL
peterhessey Jul 21, 2022
3390a4a
📝 Fix build to include README + CHANGELOG
peterhessey Jul 22, 2022
6487d06
📝 Add back in link fixing
peterhessey Jul 22, 2022
da3f4de
🐛 Fix docs links
peterhessey Jul 22, 2022
a319eac
🚨 📝 Fix markdown linting
peterhessey Jul 22, 2022
2591db6
📝 Change relative links to GitHub ones permanently
peterhessey Jul 22, 2022
3d201e2
📝 Replace more relative paths
peterhessey Jul 22, 2022
9fc25a9
⚡️ 📝 Switch to symlinks
peterhessey Jul 25, 2022
e8c6852
📝 Replace README in toctree
peterhessey Jul 25, 2022
e2702b5
📝 Update README
peterhessey Jul 26, 2022
46c1966
🐛 Attempt to fix images not rendering
peterhessey Jul 26, 2022
7f2c2e6
🐛 Fix broken links
peterhessey Jul 26, 2022
8156e79
Remove IDE settings from gitignore
peterhessey Jul 27, 2022
678b616
⚡️ Move docs to `docs/` and add Makefile back
peterhessey Jul 27, 2022
33e3a06
🙈 Update gitignore
peterhessey Jul 27, 2022
27e2893
♻️ ⚡️ Resolve review comments and change theme
peterhessey Jul 28, 2022
07ee50e
📝 🔀 Rebase + markdown linting
peterhessey Aug 2, 2022
686ca06
🔥 Remove build files (again)
peterhessey Aug 2, 2022
cc54f15
🙈 Remove pipeline-breaking symlink
peterhessey Aug 2, 2022
11ecd1f
➕ Add furo to sphinx dependencies
peterhessey Aug 2, 2022
5353ae3
📌 Move sphinx deps to environment.yml + lock
peterhessey Aug 3, 2022
f5d3f76
📝 Improve doc folder structure
peterhessey Aug 3, 2022
a30f609
Return to copying instead of symlink
peterhessey Aug 3, 2022
5985fda
📝 Update indexing and titles
peterhessey Aug 3, 2022
efc5f9e
📝 Address review comments
peterhessey Aug 3, 2022
4 changes: 2 additions & 2 deletions .gitignore
@@ -84,8 +84,8 @@ instance/
.scrapy

# Sphinx documentation
sphinx-docs/build/
sphinx-docs/source/md/
docs/build/
docs/source/docs/CHANGELOG.md

# PyBuilder
target/
4 changes: 2 additions & 2 deletions .readthedocs.yaml
@@ -9,11 +9,11 @@ build:
python: miniconda3-4.7

sphinx:
configuration: sphinx-docs/source/conf.py
configuration: docs/source/conf.py

python:
install:
- requirements: sphinx-docs/requirements.txt
- requirements: docs/requirements.txt

conda:
environment: environment.yml
90 changes: 18 additions & 72 deletions README.md
@@ -2,53 +2,26 @@

[![Build Status](https://innereye.visualstudio.com/InnerEye/_apis/build/status/InnerEye-DeepLearning/InnerEye-DeepLearning-PR?branchName=main)](https://innereye.visualstudio.com/InnerEye/_build?definitionId=112&branchName=main)

## Overview

This is a deep learning toolbox to train models on medical images (or more generally, 3D images).
It integrates seamlessly with cloud computing in Azure.

On the modelling side, this toolbox supports

- Segmentation models
- Classification and regression models
- Adding cloud support to any PyTorch Lightning model, via a [bring-your-own-model setup](docs/bring_your_own_model.md)

On the user side, this toolbox focusses on enabling machine learning teams to achieve more. It is cloud-first, and
relies on [Azure Machine Learning Services (AzureML)](https://docs.microsoft.com/en-gb/azure/machine-learning/) for execution,
bookkeeping, and visualization. Taken together, this gives:

- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of
the code. Tags are added to the experiments automatically, that can later help filter and find old experiments.
- **Transparency**: All team members have access to each other's experiments and results.
- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All
sources of randomness like multithreading are controlled for.
- **Cost reduction**: Using AzureML, all compute (virtual machines, VMs) is requested at the time of starting the
training job, and freed up at the end. Idle VMs will not incur costs. In addition, Azure low priority
nodes can be used to further reduce costs (up to 80% cheaper).
- **Scale out**: Large numbers of VMs can be requested easily to cope with a burst in jobs.

Despite the cloud focus, all training and model testing works just as well on local compute, which is important for
model prototyping, debugging, and in cases where the cloud can't be used. In particular, if you already have GPU
machines available, you will be able to utilize them with the InnerEye toolbox.

In addition, our toolbox supports:

- Cross-validation using AzureML's built-in support, where the models for
individual folds are trained in parallel. This is particularly important for the long-running training jobs
often seen with medical images.
- Hyperparameter tuning using
[Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
InnerEye-DeepLearning (IE-DL) is a toolbox for easily training deep learning models on 3D medical images. Simple to run both locally and in the cloud with AzureML, it allows users to train and run inference on the following:

- Segmentation models.
- Classification and regression models.
- Any PyTorch Lightning model, via a [bring-your-own-model setup](https://innereye-deeplearning.readthedocs.io/docs/bring_your_own_model.html).

In addition, this toolbox supports:

- Cross-validation using AzureML, where the models for individual folds are trained in parallel. This is particularly important for the long-running training jobs often seen with medical images.
- Hyperparameter tuning using [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
- Building ensemble models.
- Easy creation of new models via a configuration-based approach, and inheritance from an existing
architecture.
- Easy creation of new models via a configuration-based approach, and inheritance from an existing architecture.

Once training in AzureML is done, the models can be deployed from within AzureML.
## Documentation

## Quick Setup
For all documentation, including setup guides and APIs, please refer to the [IE-DL Read the Docs site](https://innereye-deeplearning.readthedocs.io/#).

This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.
## Quick Setup

### Instructions
This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](https://innereye-deeplearning.readthedocs.io/docs/environment.html) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.

1. Clone the InnerEye-DeepLearning repo by running the following command:

@@ -73,46 +46,19 @@ If the above runs with no errors: Congratulations! You have successfully built y
If it fails, please check the
[troubleshooting page on the Wiki](https://github.com/microsoft/InnerEye-DeepLearning/wiki/Issues-with-code-setup-and-the-HelloWorld-model).

## Other Documentation

Further detailed instructions, including setup in Azure, are here:

1. [Setting up your environment](docs/environment.md)
1. [Setting up Azure Machine Learning](docs/setting_up_aml.md)
1. [Training a simple segmentation model in Azure ML](docs/hello_world_model.md)
1. [Creating a dataset](docs/creating_dataset.md)
1. [Building models in Azure ML](docs/building_models.md)
1. [Sample Segmentation and Classification tasks](docs/sample_tasks.md)
1. [Debugging and monitoring models](docs/debugging_and_monitoring.md)
1. [Model diagnostics](docs/model_diagnostics.md)
1. [Move a model to a different workspace](docs/move_model.md)
1. [Working with FastMRI models](docs/fastmri.md)
1. [Active label cleaning and noise robust learning toolbox](https://github.com/microsoft/InnerEye-DeepLearning/blob/1606729c7a16e1bfeb269694314212b6e2737939/InnerEye-DataQuality/README.md)
1. [Using InnerEye as a git submodule](docs/innereye_as_submodule.md)
1. [Evaluating pre-trained models](docs/hippocampus_model.md)

## Deployment
## Full InnerEye Deployment

We offer a companion set of open-sourced tools that help to integrate trained CT segmentation models with clinical
software systems:

- The [InnerEye-Gateway](https://github.com/microsoft/InnerEye-Gateway) is a Windows service running in a DICOM network,
that can route anonymized DICOM images to an inference service.
- The [InnerEye-Inference](https://github.com/microsoft/InnerEye-Inference) component offers a REST API that integrates
with the InnnEye-Gateway, to run inference on InnerEye-DeepLearning models.
with the InnerEye-Gateway, to run inference on InnerEye-DeepLearning models.

Details can be found [here](docs/deploy_on_aml.md).

![docs/deployment.png](docs/deployment.png)

## More information

1. [Project InnerEye](https://www.microsoft.com/en-us/research/project/medical-image-analysis/)
1. [Releases](docs/releases.md)
1. [Changelog](CHANGELOG.md)
1. [Testing](docs/testing.md)
1. [How to do pull requests](docs/pull_requests.md)
1. [Contributing](docs/contributing.md)
![docs/deployment.png](docs/source/docs/images/deployment.png)

## Licensing

7 changes: 1 addition & 6 deletions sphinx-docs/Makefile → docs/Makefile
@@ -14,12 +14,7 @@ help:

.PHONY: help Makefile

# Do some preprocessing, including copying over md files to the source directory so sphinx can find them,
# and changing references to codefiles in md files to urls.
preprocess:
python preprocess.py

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile preprocess
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
12 changes: 12 additions & 0 deletions docs/README.md
@@ -0,0 +1,12 @@
# Building docs for InnerEye-DeepLearning

1. First, make sure you have all the packages necessary for InnerEye.
1. Install pip dependencies from `docs/requirements.txt`:

```shell
pip install -r requirements.txt
```

1. Run `make html` from the `docs` folder. This will create HTML files under `docs/build/html`.
1. From the `docs/build/html` folder, run `python -m http.server 8080` to host the docs locally.
1. From your browser, navigate to `http://localhost:8080` to view the documentation.
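The serve step above can also be run programmatically. This is a minimal sketch, not part of the repository: `serve_docs` and the `docs/build/html` default path are assumptions based on the steps listed, and the function is simply a scripted equivalent of `python -m http.server 8080`.

```python
# Hypothetical helper sketching the local preview step above: serve a built
# docs folder over HTTP, equivalent to `python -m http.server 8080` run from
# docs/build/html. The default path is an assumption from the steps above.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path


def serve_docs(build_dir: Path = Path("docs/build/html"), port: int = 8080) -> HTTPServer:
    """Return an HTTP server rooted at build_dir; call serve_forever() on it to run."""
    handler = partial(SimpleHTTPRequestHandler, directory=str(build_dir))
    return HTTPServer(("localhost", port), handler)
```

Passing port 0 lets the OS pick a free port, which is handy when 8080 is taken.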
8 changes: 2 additions & 6 deletions sphinx-docs/make.bat → docs/make.bat
@@ -10,8 +10,6 @@ if "%SPHINXBUILD%" == "" (
set SOURCEDIR=source
set BUILDDIR=build

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
@@ -21,13 +19,11 @@ if errorlevel 9009 (
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http:https://sphinx-doc.org/
echo.https:https://www.sphinx-doc.org/
exit /b 1
)

REM Do some preprocessing, including copying over md files to the source directory so sphinx can find them,
REM and changing references to codefiles in md files to urls.
python preprocess.py
if "%1" == "" goto help

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
5 changes: 3 additions & 2 deletions sphinx-docs/requirements.txt → docs/requirements.txt
@@ -1,3 +1,4 @@
sphinx==5.0.2
sphinx-rtd-theme==1.0.0
furo==2022.6.21
recommonmark==0.7.1
sphinx-rtd-theme==1.0.0
sphinx==5.0.2
26 changes: 25 additions & 1 deletion sphinx-docs/source/conf.py → docs/source/conf.py
@@ -62,7 +62,7 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_theme = 'furo'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
@@ -84,3 +84,27 @@
'members': True,
'undoc-members': True,
}


# -- Copy markdown files to source directory --------------------------------

def replace_in_file(filepath: Path, original_str: str, replace_str: str) -> None:
"""
Replace all occurrences of the original_str with replace_str in the file provided.
"""
text = filepath.read_text()
text = text.replace(original_str, replace_str)
filepath.write_text(text)


sphinx_root = Path(__file__).absolute().parent
docs_path = Path(sphinx_root / "docs")
repository_root = sphinx_root.parent.parent

# Symlink to all files that are in the head of the repository
files_to_symlink = ["CHANGELOG.md"]
for file_to_symlink in files_to_symlink:
symlink_path = docs_path / file_to_symlink
if not symlink_path.exists():
target_path = repository_root / file_to_symlink
symlink_path.symlink_to(target_path)
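The `replace_in_file` helper added to `conf.py` above can be exercised on its own. The sketch below reproduces the helper against a throwaway Markdown file; the file name and link text are illustrative only.

```python
# Self-contained sketch of the replace_in_file helper added in conf.py above,
# exercised against a temporary file so nothing in the repo is touched.
import tempfile
from pathlib import Path


def replace_in_file(filepath: Path, original_str: str, replace_str: str) -> None:
    """Replace all occurrences of original_str with replace_str in the file."""
    text = filepath.read_text()
    filepath.write_text(text.replace(original_str, replace_str))


with tempfile.TemporaryDirectory() as tmp:
    md = Path(tmp) / "page.md"  # hypothetical doc page
    md.write_text("See [setup](docs/environment.md), also [env](docs/environment.md).")
    replace_in_file(md, "docs/environment.md", "environment.html")
    print(md.read_text())
    # → See [setup](environment.html), also [env](environment.html).
```

Note that `str.replace` rewrites every occurrence, which is what link rewriting across a page needs.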
17 changes: 9 additions & 8 deletions docs/WSL.md → docs/source/docs/WSL.md
@@ -14,8 +14,8 @@ Subsystem for Linux (WSL2) or a plain Ubuntu Linux box.
If you are running a Windows box with a GPU, please follow the documentation
[here](https://docs.microsoft.com/en-us/windows/win32/direct3d12/gpu-cuda-in-wsl) to access the GPU from within WSL2.

You can also find a video walkthrough of WSL2+CUDA installation
here: https://channel9.msdn.com/Shows/Tabs-vs-Spaces/GPU-Accelerated-Machine-Learning-with-WSL-2
There is also a video walkthrough of WSL2+CUDA installation:
[GPU Accelerated Machine Learning with WSL 2](https://channel9.msdn.com/Shows/Tabs-vs-Spaces/GPU-Accelerated-Machine-Learning-with-WSL-2).

## Install WSL2

@@ -27,7 +27,8 @@ To use the commandline setup, please first install
Optionally, restart your machine.

In PowerShell as Administrator type:
```

```shell
wsl --install
```

@@ -38,7 +39,7 @@ installed, ensure that your distribution is running on top of WSL2 by executing
`wsl --list --verbose`
If all is good, the output should look like this:

```
```shell
$> wsl --list -v
NAME STATE VERSION
* Ubuntu-20.04 Running 2
@@ -63,17 +64,17 @@ Start the Windows Terminal app, create an Ubuntu tab. In the shell, run the foll
- Create conda environment: `conda env create --file environment.yml`
- Clean your pyc files (in case you have some left from Windows):

```
```shell
find * -name '*.pyc' | xargs -d'\n' rm
```

## Configure PyCharm

- https://www.jetbrains.com/help/pycharm/using-wsl-as-a-remote-interpreter.html
- [Instructions for using WSL as a remote interpreter](https://www.jetbrains.com/help/pycharm/using-wsl-as-a-remote-interpreter.html)
- You might need to reset all your firewall settings to make the debugger work with PyCharm. This can be done with these
PowerShell commands (as Administrator):

```
```shell
$myIp = (Ubuntu2004 run "cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2")
New-NetFirewallRule -DisplayName "WSL" -Direction Inbound -LocalAddress $myIp -Action Allow
```
@@ -86,4 +87,4 @@ New-NetFirewallRule -DisplayName "WSL" -Direction Inbound -LocalAddress $myIp -

## Configure VSCode

- https://code.visualstudio.com/docs/remote/wsl
- [Instructions for configuring WSL in VSCode](https://code.visualstudio.com/docs/remote/wsl)