Commit: Update documentation

Joseph Marino committed Apr 4, 2022
1 parent 914462d commit b504018
Showing 10 changed files with 192 additions and 112 deletions.

README.md (9 changes: 4 additions & 5 deletions)

© This code is made available for non-commercial academic purposes.

![overview_image](./images/overview.png)*Overview of DeepLIIF pipeline and sample input IHCs (different
brown/DAB markers -- BCL2, BCL6, CD10, CD3/CD8, Ki67) with corresponding DeepLIIF-generated hematoxylin/mpIF modalities
and classified (positive (red) and negative (blue) cell) segmentation masks. (a) Overview of DeepLIIF. Given an IHC
input, our multitask deep learning framework simultaneously infers corresponding Hematoxylin channel, mpIF DAPI, mpIF
[…] represent negative cells (blue cells in the input IHC). (b) Example DeepLIIF-generated
segmentation masks for different IHC markers. DeepLIIF, trained on clean IHC Ki67 nuclear marker images, can generalize
to noisier as well as other IHC nuclear/cytoplasmic marker images.*

## Prerequisites
1. Python 3.8
2. Docker

[…]

```
Options:
  --help  Show this message and exit.

Commands:
  prepare-testing-data   Preparing data for testing
  prepare-training-data  Preparing data for training
  serialize              Serialize DeepLIIF models using Torchscript
  test                   Test trained models
  train                  General-purpose training script for multi-task...
```
[…] on the type of the cell (positive cell: a brown hue, negative: a blue hue).
In the next step, we generate synthetic IHC images with more clustered positive cells. To do so, we modify the
segmentation mask by choosing a percentage of random negative cells in the mask (a parameter called Neg-to-Pos) and
converting them into positive cells. Some samples of the synthesized IHC images along with the original IHC image are
shown below.

![IHC_Gen_image](docs/development/images/IHC_Gen.jpg)*Overview of synthetic IHC image generation. (a) A training sample
of the IHC-generator model. (b) Some samples of synthesized IHC images using the trained IHC-Generator model. The
Neg-to-Pos shows the percentage of the negative cells in the segmentation mask converted to positive cells.*

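To make the Neg-to-Pos step concrete, here is a minimal sketch of the relabeling, assuming an instance-labeled mask (0 = background, 1..N = cell IDs) and a per-cell class map; the helper names are hypothetical illustrations, not DeepLIIF's actual code:

```python
# Hedged sketch of the Neg-to-Pos relabeling described above.
import random
from typing import Dict, Optional

import numpy as np


def neg_to_pos(cell_classes: Dict[int, str], neg_to_pos_pct: float,
               rng: Optional[random.Random] = None) -> Dict[int, str]:
    """Convert a random `neg_to_pos_pct` percent of negative cells to positive."""
    rng = rng or random.Random(0)
    negatives = [cid for cid, cls in cell_classes.items() if cls == 'negative']
    k = round(len(negatives) * neg_to_pos_pct / 100)
    converted = dict(cell_classes)
    for cid in rng.sample(negatives, k):
        converted[cid] = 'positive'
    return converted


def classes_to_rgb(instances: np.ndarray, cell_classes: Dict[int, str]) -> np.ndarray:
    """Render the edited class map back to an RGB mask, following the
    red-positive / blue-negative convention of the segmentation masks."""
    rgb = np.zeros(instances.shape + (3,), dtype=np.uint8)
    for cid, cls in cell_classes.items():
        rgb[instances == cid] = (204, 0, 0) if cls == 'positive' else (0, 0, 204)
    return rgb
```

The edited mask would then be fed to the trained IHC generator to synthesize a new image with a higher share of positive cells.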

docs/ImageJ/README.md (20 changes: 10 additions & 10 deletions)
This is a plugin for ImageJ which allows users to easily submit images to [DeepLIIF](https://deepliif.org) for processing.

1. Open an image file in ImageJ.

![Step 1](images/step01.png)

2. If desired, select a region of interest to process. Otherwise, the entire image will be processed. (Note: DeepLIIF currently has a limit on image dimensions of 3000 x 3000 pixels.)

![Step 2](images/step02.png)

3. Navigate to the `Plugins > DeepLIIF > Submit Image to DeepLIIF` menu item.

![Step 3](images/step03.png)

4. Choose the resolution/magnification of your image (`10x`, `20x`, or `40x`) and click `OK`.

![Step 4](images/step04.png)

5. The image will be sent to the DeepLIIF server for processing. This can take several seconds or more, depending on the image size.

![Step 5](images/step05.png)

6. The resulting inferred images and IHC scoring will be stored in a folder in the same directory as the original image. This folder is numbered, so that multiple runs on the same image (or regions of the image) will not overwrite previous results. The classification overlay image and IHC scores are displayed.

![Step 6](images/step06.png)

7. If desired, interactive adjustment can be performed on this result before the image or score windows are closed. Navigate to the `Plugins > DeepLIIF > Adjust DeepLIIF Results` menu item.

![Step 7](images/step07.png)

8. Adjust the two sliders to change the segmentation threshold and size gating as desired. As the sliders are adjusted, the image will update to preview the results. The segmentation threshold adjusts how the generated probability map is used to classify pixels as positive or negative; size gating allows smaller cells to be omitted from the final results. (A minimal sketch of both operations appears after this list.)

![Step 8](images/step08.png)

9. When satisfied with the settings, click `OK` to send the images to the DeepLIIF server for processing. This can take several seconds or more, depending on the image size.

![Step 9](images/step05.png)

10. The updated classification images and IHC scoring are written to their corresponding files. The updated classification overlay image and IHC scores are displayed.

![Step 10](images/step10.png)

11. Further adjustments can be made if desired, repeating steps 7-10, until the result image and score windows are closed.

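For intuition about step 8, the following is a hedged sketch of what segmentation thresholding and size gating could look like, assuming a per-pixel probability map in [0, 1] and pixel-count gating; it is an illustration only, not the DeepLIIF server's implementation:

```python
# Illustration only: binarize a probability map, then drop connected
# components smaller than the size gate.
import numpy as np
from scipy import ndimage


def classify_pixels(prob_map: np.ndarray, threshold: float = 0.5,
                    min_size: int = 50) -> np.ndarray:
    """Return a boolean mask of pixels kept as cells."""
    binary = prob_map >= threshold                      # segmentation threshold
    labeled, n = ndimage.label(binary)                  # connected components
    sizes = ndimage.sum(binary, labeled, range(1, n + 1))
    keep_ids = np.flatnonzero(sizes >= min_size) + 1    # size gating
    return np.isin(labeled, keep_ids)
```

Raising `threshold` classifies fewer pixels as positive, while raising `min_size` removes smaller cells, mirroring the two sliders.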

docs/README.md (6 changes: 3 additions & 3 deletions)
<p align="center">
<a href="https://doi.org/10.1101/2021.05.01.442219">Read Link</a>
|
<a href="https://deepliif.org/">AWS Cloud Deployment</a>
<a href="https://deepliif.org/">Cloud Deployment</a>
|
<a href="#docker-file">Docker</a>
<a href="deployment/#docker">Docker</a>
|
<a href="https://github.com/nadeemlab/DeepLIIF/issues">Report Bug</a>
|

© This code is made available for non-commercial academic purposes.

![overview_image](./images/overview.png)*Overview of DeepLIIF pipeline and sample input IHCs (different
brown/DAB markers -- BCL2, BCL6, CD10, CD3/CD8, Ki67) with corresponding DeepLIIF-generated hematoxylin/mpIF modalities
and classified (positive (red) and negative (blue) cell) segmentation masks. (a) Overview of DeepLIIF. Given an IHC
input, our multitask deep learning framework simultaneously infers corresponding Hematoxylin channel, mpIF DAPI, mpIF
[…]

docs/cloud/README.md (62 additions, new file)
# Cloud Deployment
If you don't have access to a GPU or appropriate hardware and don't want to install ImageJ, we have also created a [cloud-native DeepLIIF deployment](https://deepliif.org) with a user-friendly interface to upload images, visualize, interact, and download the final results.

DeepLIIF can also be accessed programmatically through an endpoint by posting a multipart-encoded request
containing the original image file:

```
POST /api/infer

Parameters

  img (required)
    file: image to run the models on
  resolution
    string: resolution used to scan the slide (10x, 20x, 40x), defaults to 20x
  pil
    boolean: if true, use PIL.Image.open() to load the image, instead of python-bioformats
  slim
    boolean: if true, return only the segmentation result image
```

For example, in Python:

```python
import os
import json
import base64
from io import BytesIO

import requests
from PIL import Image

# Use the sample images from the main DeepLIIF repo
images_dir = './Sample_Large_Tissues'
filename = 'ROI_1.png'

res = requests.post(
    url='https://deepliif.org/api/infer',
    files={
        'img': open(f'{images_dir}/{filename}', 'rb')
    },
    # optional param that can be 10x, 20x (default) or 40x
    params={
        'resolution': '20x'
    }
)

data = res.json()

def b64_to_pil(b):
    return Image.open(BytesIO(base64.b64decode(b.encode())))

for name, img in data['images'].items():
    output_filepath = f'{images_dir}/{os.path.splitext(filename)[0]}_{name}.png'
    with open(output_filepath, 'wb') as f:
        b64_to_pil(img).save(f, format='PNG')

print(json.dumps(data['scoring'], indent=2))
```
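As a usage note, the optional `pil` and `slim` flags listed above can be passed the same way as `resolution`; for example, a request that asks only for the segmentation result might look like the following sketch (assuming the endpoint accepts the parameters as documented):

```python
# Hypothetical variation of the request above using the `slim` flag.
res = requests.post(
    url='https://deepliif.org/api/infer',
    files={'img': open(f'{images_dir}/{filename}', 'rb')},
    params={'resolution': '40x', 'slim': 'true'},
)
```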

docs/deployment/README.md (21 changes: 13 additions & 8 deletions)
## Docker
We provide a Dockerfile that can be used to run the DeepLIIF models inside a container.
First, you need to install the [Docker Engine](https://docs.docker.com/engine/install/ubuntu/).
After installing Docker, follow these steps:

* Download the pretrained model files and place them in DeepLIIF/checkpoints/DeepLIIF_Latest_Model.
* Change XXX of the **WORKDIR** line in the **Dockerfile** to the directory containing the DeepLIIF project.
* To create a Docker image from the Dockerfile:
```
docker build -t cuda/deepliif .
```
The image is then used as a base. You can copy it and use it to run an application. The application needs an isolated
environment in which to run, referred to as a container.

* To create and run a container:
```
docker run -it -v `pwd`:`pwd` -w `pwd` cuda/deepliif deepliif test --input-dir Sample_Large_Tissues
```

You can easily run any CLI command in the activated environment and copy the results.

## Torchserve

This section describes how to run DeepLIIF's inference using [Torchserve](https://github.com/pytorch/serve) workflows.
Workflows can be composed of both PyTorch models and Python functions that are connected through a DAG.
For DeepLIIF there are 4 main stages (see the workflow diagram below):

* `Pre-process` deserialize the image from the request and return a tensor created from it.
* `G1-4` run the ResNets to generate the Hematoxylin, DAPI, LAP2 and Ki67 masks.
* `G51-5` run the UNets and apply `Weighted Average` to generate the Segmentation image.
* `Aggregate` aggregate and serialize the results and return them to the user.

![DeepLIIF Torchserve workflow](./images/deepliif_torchserve_workflow.png)
*Composition of DeepLIIF nets into a Torchserve workflow.*

In practice, users need to call this workflow for each tile generated from the original image.
A common use case scenario would be:

1. Load an IHC image and generate the tiles.
2. For each tile:
    1. Resize to 512x512 and transform to tensor.
    2. Serialize the tensor and use the inference API to generate all the masks.
    3. Deserialize the results.
3. Stitch back the results and apply post-processing operations.
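A hedged sketch of this client-side loop follows; the workflow URL, serialization format, and response handling are assumptions to adapt to the deployed workflow's actual contract:

```python
# Illustrative tiling client for the use case above; endpoint name,
# payload encoding, and stitching are hypothetical.
import io

import requests
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor

TILE = 512
WORKFLOW_URL = 'http://localhost:8080/wfpredict/deepliif'  # assumed name


def infer_tile(tile: Image.Image) -> bytes:
    tensor = to_tensor(tile.resize((TILE, TILE)))  # 1. resize and to tensor
    buf = io.BytesIO()
    torch.save(tensor, buf)                        # 2. serialize for the request
    res = requests.post(WORKFLOW_URL, data=buf.getvalue())
    return res.content                             # masks, to be deserialized


img = Image.open('ROI_1.png')
tiles = [img.crop((x, y, x + TILE, y + TILE))
         for y in range(0, img.height, TILE)
         for x in range(0, img.width, TILE)]
results = [infer_tile(t) for t in tiles]
# 3. Stitch the results back together and apply post-processing (omitted).
```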

The next sections show how to deploy the model server.

### Prerequisites

1\. Install Torchserve and torch-model-archiver following [these instructions](https://github.com/pytorch/serve#install-torchserve-and-torch-model-archiver).
On macOS, navigate to the `model-server` directory:

```shell
# […]
source venv/bin/activate
pip install torch torchserve torch-model-archiver torch-workflow-archiver
```

2\. Download and unzip the latest version of the DeepLIIF models from [zenodo](https://zenodo.org/record/4751737#.YXsTuS2cZhF).

```shell
wget https://zenodo.org/record/4751737/files/DeepLIIF_Latest_Model.zip
# […]
```

docs/installation/README.md (4 changes: 2 additions & 2 deletions)
# Installation

## Prerequisites
1. Python 3.8
2. Docker

```
# […]
$ source venv/bin/activate
```

The package is composed of two parts:

1. A library that implements the core functions used to train and test DeepLIIF models.
2. A CLI to run common batch operations including training, batch testing and Torchscript model serialization.

[…]

```
Options:
  --help  Show this message and exit.

Commands:
  prepare-testing-data   Preparing data for testing
  prepare-training-data  Preparing data for training
  serialize              Serialize DeepLIIF models using Torchscript
  test                   Test trained models
  train                  General-purpose training script for multi-task...
```

docs/testing/README.md (29 additions, new file)
# Testing

## Serialize Model
The installed `deepliif` uses Dask to perform inference on the input IHC images.
Before running the `test` command, the model files must be serialized using Torchscript.
To serialize the model files:
```
deepliif serialize --models-dir /path/to/input/model/files \
                   --output-dir /path/to/output/model/files
```
* By default, the model files are expected to be located in `DeepLIIF/model-server/DeepLIIF_Latest_Model`.
* By default, the serialized files will be saved to the same directory as the input model files.

## Testing
To test the model:
```
deepliif test --input-dir /path/to/input/images \
              --output-dir /path/to/output/images \
              --tile-size 512
```
* The latest version of the pretrained models can be downloaded [here](https://zenodo.org/record/4751737#.YKRTS0NKhH4).
* Before running test on images, the model files must be serialized as described above.
* The serialized model files are expected to be located in `DeepLIIF/model-server/DeepLIIF_Latest_Model`.
* The test results will be saved to the specified output directory, which defaults to the input directory.
* The default tile size is 512.
* Testing datasets can be downloaded [here](https://zenodo.org/record/4751737#.YKRTS0NKhH4).

If you prefer, it is possible to run the model using Torchserve.
Please see below for instructions on how to deploy the model with Torchserve and for an example of how to run the inference.
