This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

DOC: Add part of the API to the Sphinx docs (#718)
* Add part of the API to the Sphinx docs

* Improve layout of landing page

* Remove redundant caption

* Include all documentation in Sphinx and fix docstrings

* Remove redundant options
fepegar committed Apr 19, 2022
1 parent c5b16e5 commit a15a1a2
Showing 13 changed files with 74 additions and 19 deletions.
@@ -114,8 +114,9 @@ def random_crop(sample: Sample,
:param class_weights: A weighting vector with values [0, 1] to influence the class the center crop
voxel belongs to (must sum to 1), uniform distribution assumed if none provided.
:return: Tuple item 1: The cropped images, labels, and mask. Tuple item 2: The center that was chosen for the crop,
before shifting to be inside of the image. Tuple item 3: The slicers that convert the input image to the chosen
crop.
before shifting to be inside of the image. Tuple item 3: The slicers that convert the input image to the chosen
crop.
:raises ValueError: If there are shape mismatches among the arguments or if the crop size is larger than the image.
"""
slicers, center = slicers_for_random_crop(sample, crop_size, class_weights)
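As an aside for readers of the random_crop docstring above, here is a small, self-contained sketch of the weighted centre selection it describes: the class of the centre voxel is drawn according to class_weights, then a voxel of that class is picked uniformly at random. The function name and the fallback behaviour are illustrative assumptions, not the InnerEye implementation.

import numpy as np

def pick_crop_center(labels: np.ndarray, class_weights) -> tuple:
    # labels: integer class map of shape Z x Y x X; class_weights must sum to 1.
    rng = np.random.default_rng()
    chosen_class = rng.choice(len(class_weights), p=class_weights)
    candidates = np.argwhere(labels == chosen_class)
    if len(candidates) == 0:
        # hypothetical fallback: if the chosen class is absent, allow any voxel
        candidates = np.argwhere(np.ones_like(labels, dtype=bool))
    return tuple(candidates[rng.integers(len(candidates))])

labels = np.zeros((8, 16, 16), dtype=int)
labels[:, 4:12, 4:12] = 1  # toy foreground block
print(pick_crop_center(labels, class_weights=[0.1, 0.9]))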
1 change: 1 addition & 0 deletions InnerEye/ML/augmentations/image_transforms.py
@@ -76,6 +76,7 @@ def __call__(self, data: torch.Tensor) -> torch.Tensor:

class ElasticTransform:
"""Elastic deformation of images as described in [Simard2003]_.
.. [Simard2003] Simard, Steinkraus and Platt, "Best Practices for
Convolutional Neural Networks applied to Visual Document Analysis", in
Proc. of the International Conference on Document Analysis and
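For context on the [Simard2003] reference in that docstring, a minimal NumPy/SciPy sketch of this style of elastic deformation for a 2-D image: random displacement fields are smoothed with a Gaussian filter, scaled by alpha, and used to resample the image. Parameter names and defaults here are illustrative assumptions; the ElasticTransform class in this diff presumably applies the same idea to the library's own image tensors.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform_2d(image: np.ndarray, alpha: float = 34.0, sigma: float = 4.0,
                      seed: int = 0) -> np.ndarray:
    # Displace every pixel by a smoothed random field, then resample the image.
    rng = np.random.default_rng(seed)
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    rows, cols = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]), indexing="ij")
    coords = np.stack([rows + dy, cols + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")

image = np.pad(np.ones((8, 8)), 12)  # toy bright square on a dark background
print(elastic_deform_2d(image).shape)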
1 change: 1 addition & 0 deletions InnerEye/ML/pipelines/inference.py
@@ -239,6 +239,7 @@ def predict_whole_image(self, image_channels: np.ndarray,
patient_id: int = 0) -> InferencePipeline.Result:
"""
Performs a single inference pass through the pipeline for the provided image
:param image_channels: The input image channels to perform inference on in format: Channels x Z x Y x X.
:param voxel_spacing_mm: Voxel spacing to use for each dimension in (Z x Y x X) order
:param mask: A binary image used to ignore results outside it in format: Z x Y x X.
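To make the documented shapes concrete, a short sketch that only constructs inputs matching the docstring above; the pipeline object itself is built elsewhere, so the call at the end is left commented out as a hypothetical.

import numpy as np

image_channels = np.random.rand(1, 32, 256, 256).astype(np.float32)  # Channels x Z x Y x X
voxel_spacing_mm = (3.0, 1.0, 1.0)                                    # Z x Y x X
mask = np.ones((32, 256, 256), dtype=np.uint8)                        # binary, Z x Y x X
# hypothetical call, assuming an already constructed InferencePipeline instance:
# result = pipeline.predict_whole_image(image_channels, voxel_spacing_mm, mask, patient_id=0)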
17 changes: 10 additions & 7 deletions InnerEye/ML/pipelines/scalar_inference.py
@@ -75,6 +75,7 @@ def create_from_checkpoint(path_to_checkpoint: Path,
pipeline_id: int = 0) -> Optional[ScalarInferencePipeline]:
"""
Creates an inference pipeline from a single checkpoint.
:param path_to_checkpoint: Path to the checkpoint to recover.
:param config: Model configuration information.
:param pipeline_id: ID for the pipeline to be created.
@@ -95,11 +96,12 @@ def create_from_checkpoint(path_to_checkpoint: Path,
def predict(self, sample: Dict[str, Any]) -> ScalarInferencePipelineBase.Result:
"""
Runs the forward pass on a single batch.
:param sample: Single batch of input data.
In the form of a dict containing at least the fields:
metadata, label, images, numerical_non_image_features,
categorical_non_image_features and segmentations.
:return: Returns ScalarInferencePipelineBase.Result with the subject ids, ground truth labels and predictions.
In the form of a dictionary containing at least the fields:
metadata, label, images, numerical_non_image_features,
categorical_non_image_features and segmentations.
:return: ScalarInferencePipelineBase.Result with the subject ids, ground truth labels and predictions.
"""
assert isinstance(self.model_config, ScalarModelBase)
model_inputs_and_labels = get_scalar_model_inputs_and_labels(self.model.model,
@@ -158,10 +160,11 @@ def predict(self, sample: Dict[str, Any]) -> ScalarInferencePipelineBase.Result:
"""
Performs inference on a single batch. First does the forward pass on all of the single inference pipelines,
and then aggregates the results.
:param sample: single batch of input data.
In the form of a dict containing at least the fields:
metadata, label, images, numerical_non_image_features,
categorical_non_image_features and segmentations.
In the form of a dictionary containing at least the fields:
metadata, label, images, numerical_non_image_features,
categorical_non_image_features and segmentations.
:return: Returns ScalarInferencePipelineBase.Result with the subject ids, ground truth labels and predictions.
"""
results = [pipeline.predict(sample) for pipeline in self.pipelines]
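A self-contained sketch of the ensemble pattern the two predict docstrings describe: a batch dictionary with the listed fields is passed to every member pipeline, and the per-member outputs are aggregated. The stub model and the mean aggregation below are illustrative assumptions; the real aggregation is whatever the ensemble class implements.

import numpy as np

sample = {  # field names taken from the docstring; values are toy placeholders
    "metadata": [{"subject_id": "s1"}],
    "label": np.array([[1.0]], dtype=np.float32),
    "images": np.random.rand(1, 1, 32, 32, 32).astype(np.float32),
    "numerical_non_image_features": np.zeros((1, 0), dtype=np.float32),
    "categorical_non_image_features": np.zeros((1, 0), dtype=np.float32),
    "segmentations": None,
}

def predict_stub(batch, seed):
    # stands in for one single-checkpoint pipeline's forward pass
    rng = np.random.default_rng(seed)
    return rng.uniform(size=(batch["label"].shape[0], 1))  # posteriors per subject

member_posteriors = [predict_stub(sample, seed) for seed in range(3)]
print(np.mean(member_posteriors, axis=0))  # illustrative mean aggregation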
7 changes: 7 additions & 0 deletions sphinx-docs/source/conf.py
@@ -77,3 +77,10 @@
'.rst': 'restructuredtext',
'.md': 'markdown',
}

# Autodoc options

autodoc_default_options = {
'members': True,
'undoc-members': True,
}
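For readers unfamiliar with sphinx.ext.autodoc: the two keys added here become project-wide defaults for every autodoc directive, which is why the per-file ':members:' and ':undoc-members:' options are dropped from configs.rst further down ("Remove redundant options"). The same dictionary, annotated with explanatory comments only:

autodoc_default_options = {
    'members': True,        # document all public members of each documented module or class
    'undoc-members': True,  # also include members that have no docstring
}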
23 changes: 15 additions & 8 deletions sphinx-docs/source/index.rst
@@ -8,7 +8,6 @@ InnerEye-DeepLearning Documentation

.. toctree::
:maxdepth: 1
:caption: Contents

md/README.md
md/docs/WSL.md
@@ -19,12 +18,6 @@
md/docs/sample_tasks.md
md/docs/debugging_and_monitoring.md

.. toctree::
:maxdepth: 1
:caption: About Model Configs

rst/configs.rst

.. toctree::
:maxdepth: 1
:caption: Further reading for contributors
@@ -33,11 +26,25 @@
md/docs/testing.md
md/docs/contributing.md

md/docs/hello_world_model.md
md/docs/deploy_on_aml.md
md/docs/bring_your_own_model.md
md/docs/fastmri.md
md/docs/innereye_as_submodule.md
md/docs/model_diagnostics.md
md/docs/move_model.md
md/docs/releases.md
md/docs/self_supervised_models.md
md/CHANGELOG.md

.. toctree::
:caption: API documentation

rst/api/index


Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
8 changes: 8 additions & 0 deletions sphinx-docs/source/rst/api/ML/augmentations.rst
@@ -0,0 +1,8 @@
Data augmentation
=================

.. automodule:: InnerEye.ML.augmentations.augmentation_for_segmentation_utils

.. automodule:: InnerEye.ML.augmentations.image_transforms

.. automodule:: InnerEye.ML.augmentations.transform_pipeline
@@ -40,6 +40,4 @@ Segmentation Model Configuration
.. autoattribute:: is_plotting_enabled

.. automodule:: InnerEye.ML.config
:members:
:undoc-members:
:exclude-members: SegmentationModelBase
10 changes: 10 additions & 0 deletions sphinx-docs/source/rst/api/ML/index.rst
@@ -0,0 +1,10 @@
Machine learning
================

.. toctree::

configs
runner
augmentations
photometric_normalization
pipelines
4 changes: 4 additions & 0 deletions sphinx-docs/source/rst/api/ML/photometric_normalization.rst
@@ -0,0 +1,4 @@
Photometric normalization
=========================

.. automodule:: InnerEye.ML.photometric_normalization
8 changes: 8 additions & 0 deletions sphinx-docs/source/rst/api/ML/pipelines.rst
@@ -0,0 +1,8 @@
Pipelines
=========

.. automodule:: InnerEye.ML.pipelines.inference

.. automodule:: InnerEye.ML.pipelines.ensemble

.. automodule:: InnerEye.ML.pipelines.scalar_inference
4 changes: 4 additions & 0 deletions sphinx-docs/source/rst/api/ML/runner.rst
@@ -0,0 +1,4 @@
Runner
======

.. automodule:: InnerEye.ML.runner
3 changes: 3 additions & 0 deletions sphinx-docs/source/rst/api/index.rst
@@ -0,0 +1,3 @@
.. toctree::

ML/index
