Nuance PIN MONAI Integration Example App #328


Merged on Sep 21, 2022
44 commits
e902820
add example of Nuance PIN integration
Aug 12, 2022
e3dcc8c
adding lib folder for Nuance PIN wheels
Aug 15, 2022
0a99b55
updates for functional container
Aug 16, 2022
fbe2df8
readme header
Aug 16, 2022
8448dae
checking note div capabilities
Aug 16, 2022
506418a
completing docs
Aug 16, 2022
b92db78
format pass
Aug 16, 2022
7634f20
Update examples/integrations/nuance_pin/app/spleen_seg.py
aihsani Aug 26, 2022
d1b2144
Update examples/integrations/nuance_pin/README.md
aihsani Aug 26, 2022
061701c
Update examples/integrations/nuance_pin/README.md
aihsani Aug 26, 2022
c11291c
Update examples/integrations/nuance_pin/README.md
aihsani Aug 26, 2022
d8c5365
CPB PR comments
Aug 26, 2022
95a3125
adding inference operator
Sep 13, 2022
d2e30cc
make covid lesion segmentation operator
Sep 13, 2022
ada742d
updating application to detection use case
Sep 16, 2022
c2e942e
using proper retina net inferer
Sep 17, 2022
9fa3ed5
bug fix
Sep 17, 2022
598916f
updating selection rules for lidc dataset
Sep 17, 2022
0115528
setup output with domain-specific object
Sep 17, 2022
5e210b1
renaming app to lung nodule and creating post-inference operators
Sep 19, 2022
0b5810f
updating IO bindings
Sep 19, 2022
55cc326
name updates, and updates flow to generate GSPS
Sep 19, 2022
8d13e32
updates gsps
Sep 19, 2022
58e9e50
map boxes to original dataset
Sep 19, 2022
0de86da
use highdicom from dicom annotations
Sep 19, 2022
e5513c1
sw batch size adjustment
Sep 19, 2022
eaf8175
adjust gsps annotations
Sep 20, 2022
2c91c8e
refactoring
Sep 20, 2022
d1c2a85
removing
Sep 20, 2022
1ce48d7
adding Nuance PIN report generation
Sep 20, 2022
9119eb0
automatically download model into container
Sep 20, 2022
f83fc51
fix numpy
Sep 20, 2022
8164767
fix base app name
Sep 20, 2022
bf24dcd
formatting
Sep 20, 2022
b8896aa
moving to top level and updating documentation
Sep 20, 2022
1df573a
temporary comment
Sep 20, 2022
c167615
fromatting
Sep 20, 2022
7668570
updates to get good results on LIDC data
Sep 20, 2022
2f9b4e3
format fixes
Sep 20, 2022
dc7ccb6
update app and service name
Sep 21, 2022
a646ec6
unused transforms
Sep 21, 2022
592be4a
formatting fixes
Sep 21, 2022
0e91be7
formatting
Sep 21, 2022
15a36fb
misspell
Sep 21, 2022
3 changes: 3 additions & 0 deletions .gitignore
@@ -137,3 +137,6 @@ output
# Sphinx temporary files
docs/notebooks
_autosummary

# model files
*.ts
4 changes: 4 additions & 0 deletions examples/integrations/nuance_pin/.dockerignore
@@ -0,0 +1,4 @@
Dockerfile*
docker-compose.yml
README.md
README*
59 changes: 59 additions & 0 deletions examples/integrations/nuance_pin/Dockerfile
@@ -0,0 +1,59 @@
FROM nvcr.io/nvidia/pytorch:21.07-py3 AS application

ARG PARTNER_NAME
ARG SERVICE_NAME
ARG VERSION
ARG MONAI_APP_MODULE
ARG MODEL_PATH
ARG EXTRA_PYTHON_PACKAGES

ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# python3-gdcm or python-gdcm is required for decompression
RUN apt-get -y update && \
apt-get -y install --no-install-recommends python3-distutils python3-gdcm && \
# apt-get -y install python3.7 && \
apt-get autoclean && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

ENV DEBUG=YES
ENV KEEP_FILES=YES

# make sure all messages reach the console
ENV PYTHONUNBUFFERED=1

# copy and activate virtualenv
# ENV VIRTUAL_ENV=/app/venv
# COPY --from=foundation /opt/venv "${VIRTUAL_ENV}"
# RUN . /opt/env/bin/activate
# ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"

# copy MONAI app files
COPY . /app/.
WORKDIR /app

# non-root aiserviceuser in group aiserviceuser with UserID and GroupID as 20225
RUN groupadd -g 20225 -r aiserviceuser && useradd -u 20225 -r -g aiserviceuser aiserviceuser && chown -R aiserviceuser:aiserviceuser /app && \
chown -R aiserviceuser:aiserviceuser /var
USER aiserviceuser:aiserviceuser

ENV VIRTUAL_ENV=.venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

RUN python -m pip install --upgrade pip && \
python -m pip install --upgrade --no-cache-dir ${EXTRA_PYTHON_PACKAGES} -r requirements.txt && \
python -m pip install --upgrade --no-cache-dir lib/ai_service-*-py3-none-any.whl && \
rm -rf lib && \
rm requirements.txt

ENV AI_PARTNER_NAME ${PARTNER_NAME}
ENV AI_SVC_NAME ${SERVICE_NAME}
ENV AI_SVC_VERSION ${VERSION}
ENV AI_MODEL_PATH ${MODEL_PATH}
ENV MONAI_APP_CLASSPATH ${MONAI_APP_MODULE}

ENV PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
CMD ["python", "app_wrapper.py"]
186 changes: 186 additions & 0 deletions examples/integrations/nuance_pin/README.md
@@ -0,0 +1,186 @@
# Running MONAI Apps in Nuance PIN

MONAI Deploy Apps can be deployed as Nuance PIN applications with minimal effort and near-zero coding.

This folder includes an example MONAI app, AI-based Spleen Segmentation, which is wrapped in the Nuance PIN API.
The Nuance PIN wrapper code allows MONAI app developers, in most cases, to deploy their existing MONAI apps in Nuance
without code changes.

## Prerequisites

Before setting up and running the example MONAI spleen segmentation app as a Nuance PIN App, the user will need to install/download the following libraries.
A GPU is optional for the example app; however, using a GPU for inference is recommended.

Minimum software requirements:
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- [NVIDIA Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#pre-requisites)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Nuance PIN SDK](https://www.nuance.com/healthcare/diagnostics-solutions/precision-imaging-network.html)

> **Note**: The Nuance PIN SDK does not require a host installation for the example app to work. We will explore options in the [Quickstart](#quickstart) section.

## Quickstart

This integration example already contains the AI Spleen segmentation code, which is an exact copy of the code found under `examples/apps/ai_spleen_seg_app`. However, to make the example work properly we need to download the spleen segmentation model and the data for local testing.

If you are reading this guide on the MONAI GitHub repo, you will need to clone the MONAI repo and change directory to the Nuance PIN integration path.
```bash
git clone https://github.com/Project-MONAI/monai-deploy-app-sdk.git
cd monai-deploy-app-sdk/examples/integrations/nuance_pin
```

In this folder you will see the following directory structure
```bash
nuance_pin
├── app # directory with MONAI app code
├── lib # directory where we will place Nuance PIN wheels
├── model # directory where we will place the model used by our MONAI app
├── app_wrapper.py # Nuance PIN wrapper code
├── docker-compose.yml # docker compose runtime script
├── Dockerfile # container image build script
├── README.md # this README
└── requirements.txt # libraries required for the example integration to work
```

We will place the spleen segmentation model in the `nuance_pin/model` folder and use that as the model location in `app/spleen_seg.py`; however,
this is not a hard restriction. The developer may choose a location of their own within the `nuance_pin` subtree, but this change requires updating the
`MODEL_PATH` variable in `docker-compose.yml`.

### Downloading Data and Model for Spleen Segmentation

To download the spleen model and test data you may follow the instructions in the MONAI Deploy [documentation](https://docs.monai.io/projects/monai-deploy-app-sdk/en/latest/getting_started/tutorials/03_segmentation_app.html#executing-from-shell). The steps are also summarized below:

```bash
# choose a download directory outside of the integration folder
pushd ~/Downloads

# install gdown
pip install gdown

# download spleen data and model
gdown https://drive.google.com/uc?id=1cJq0iQh_yzYIxVElSlVa141aEmHZADJh

# After downloading ai_spleen_bundle_data.zip from the web browser or using gdown,
unzip -o ai_spleen_bundle_data.zip

popd

# move the spleen model from the download directory to the integration folder model directory
mv ~/Downloads/model.ts model/.
```

Next we must place the Nuance PIN `ai_service` wheel in the `nuance_pin/lib` folder. This would have been obtained
in step 3 of the [prerequisites](#prerequisites).

### Running the Example App in the Container

Now we are ready to build and start the container that runs our MONAI app as a Nuance service.
```bash
docker-compose up --build
```

If the build is successful, a service will start on `localhost:5000`. We can verify the service is running
by issuing a "live" request such as
```bash
curl -v http://localhost:5000/aiservice/2/live && echo ""
```
The issued command should return the developer, app, and version of the deployed example app.
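
The same check can be done programmatically. Below is a minimal sketch that parses such a reply, assuming a JSON body with `developer`, `app`, and `version` fields; the exact response schema is defined by the Nuance PIN AI Service API, so treat these field names as illustrative placeholders:

```python
import json

def parse_live_response(body: str):
    """Pull developer, app name, and version out of a live-endpoint reply.

    The field names used here are illustrative placeholders; consult the
    Nuance PIN AI Service documentation for the actual response schema.
    """
    payload = json.loads(body)
    return payload.get("developer"), payload.get("app"), payload.get("version")

# Hypothetical reply body matching the env vars set for the example app:
reply = '{"developer": "NVIDIA", "app": "ai_spleen_seg_app", "version": "0.1.0"}'
print(parse_live_response(reply))
```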

Now we can run the example app with the example spleen data as the payload using Nuance PIN AI Service Test
(`AiSvcTest`) utility obtained with the Nuance PIN SDK.
```bash
# create a virtual environment and activate it
python3 -m venv /opt/venv
. /opt/venv/bin/activate

# install AiSvcTest
pip install AiSvcTest-<version>-py3-none-any.whl

# create an output directory for the inference results
mkdir -p ~/Downloads/dcm/out

# run AiSvcTest with spleen dicom payload
python -m AiSvcTest -i ~/Downloads/dcm -o ~/Downloads/dcm/out -s http://localhost:5000 -V 2 -k
```

### Running the Example App on the Host

Alternatively, the user may choose to run the Nuance PIN service directly on the host. For this we must install the following:
- Nuance PIN AI Service libraries
- Libraries in `requirements.txt`

```bash
# create a virtual environment and activate it
python3 -m venv /opt/venv
. /opt/venv/bin/activate

# install Nuance Ai Service
pip install ai_service-<version>-py3-none-any.whl

# install requirements
pip install -r requirements.txt

# run the service
export AI_PARTNER_NAME=NVIDIA
export AI_SVC_NAME=ai_spleen_seg_app
export AI_SVC_VERSION=0.1.0
export AI_MODEL_PATH=model/model.ts
export MONAI_APP_CLASSPATH=app.spleen_seg.AISpleenSegApp
export PYTHONPATH=$PYTHONPATH:.
python app_wrapper.py
```

Now we can issue a "live" request to check whether the service is running
```bash
curl -v http://localhost:5000/aiservice/2/live && echo ""
```
As we did in the last section, we can now run the example app with the example spleen data as the payload using Nuance PIN AI Service Test
(`AiSvcTest`) utility obtained with the Nuance PIN SDK.
```bash
. /opt/venv/bin/activate

# install AiSvcTest
pip install AiSvcTest-<version>-py3-none-any.whl

# create an output directory for the inference results
mkdir -p ~/Downloads/dcm/out

# run AiSvcTest with spleen dicom payload
python -m AiSvcTest -i ~/Downloads/dcm -o ~/Downloads/dcm/out -s http://localhost:5000 -V 2 -k
```

### Bring Your Own MONAI App

This example integration may be modified to fit any existing MONAI app; however, there may be caveats.

Nuance PIN requires all artifacts present in the output folder to also be listed in the `resultManifest.json` output file
for the run to be considered successful. To see what this means in practical terms, check the `resultManifest.json` output from the
example app we ran in the previous sections. You will notice an entry in `resultManifest.json` that corresponds to the DICOM
SEG output generated by the underlying MONAI app
```json
"study": {
"uid": "1.2.826.0.1.3680043.2.1125.1.67295333199898911264201812221946213",
"artifacts": [],
"series": [
{
"uid": "1.2.826.0.1.3680043.2.1125.1.67295333199898911264201812221946213",
"artifacts": [
{
"documentType": "application/dicom",
"groupCode": "default",
"name": "dicom_seg-DICOMSEG.dcm",
"trackingUids": []
}
]
}
]
},
```
This entry is generated by `app_wrapper.py`, which takes care of adding any DICOM file present in the output folder to `resultManifest.json`
to ensure that existing MONAI apps complete successfully when deployed in Nuance. In general, however, the developer may need to tailor some
of the code in `app_wrapper.py` to provide more insight to Nuance's network, such as adding findings, conclusions, and SNOMED codes.
All of this is handled within the Nuance PIN SDK libraries - for more information please consult the Nuance PIN [documentation](https://www.nuance.com/healthcare/diagnostics-solutions/precision-imaging-network.html).
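
As a rough illustration of what that bookkeeping involves, the sketch below collects every DICOM file in an output folder into manifest-style artifact entries matching the example above. The helper name is hypothetical and this is not the actual `app_wrapper.py` code:

```python
from pathlib import Path

def dicom_artifacts(output_dir: str) -> list:
    """Build resultManifest.json-style artifact entries for each DICOM
    file found in output_dir. Illustrative sketch only; the real logic
    lives in app_wrapper.py."""
    return [
        {
            "documentType": "application/dicom",
            "groupCode": "default",
            "name": dcm.name,
            "trackingUids": [],
        }
        for dcm in sorted(Path(output_dir).glob("*.dcm"))
    ]
```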

In simpler cases, the developer will need to place their code and model under `nuance_pin`. Placing the model under `model` is optional, as the model may be placed
anywhere the code under `app` can access it; however, special consideration is needed when the model must be deployed inside the container image. The MONAI app code
is placed in `app` and structured as a small Python project.
10 changes: 10 additions & 0 deletions examples/integrations/nuance_pin/app/__init__.py
@@ -0,0 +1,10 @@
# Copyright 2021-2022 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
114 changes: 114 additions & 0 deletions examples/integrations/nuance_pin/app/spleen_seg.py
@@ -0,0 +1,114 @@
# Copyright 2021-2022 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from monai.deploy.core import Application, resource
from monai.deploy.core.domain import Image
from monai.deploy.core.io_type import IOType
from monai.deploy.operators.dicom_data_loader_operator import DICOMDataLoaderOperator
from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator
from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator
from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator
from monai.deploy.operators.monai_bundle_inference_operator import IOMapping, MonaiBundleInferenceOperator

# from monai.deploy.operators.stl_conversion_operator import STLConversionOperator # import as needed.


@resource(cpu=1, gpu=1, memory="7Gi")
# pip_packages can be a string containing the path to a requirements.txt file, or a list of packages.
# The monai pkg is not required by this class; it is required by the included operators.
class AISpleenSegApp(Application):
def __init__(self, *args, **kwargs):
"""Creates an application instance."""
self._logger = logging.getLogger("{}.{}".format(__name__, type(self).__name__))
super().__init__(*args, **kwargs)

def run(self, *args, **kwargs):
# This method calls the base class to run. Can be omitted if simply calling through.
self._logger.info(f"Begin {self.run.__name__}")
super().run(*args, **kwargs)
self._logger.info(f"End {self.run.__name__}")

def compose(self):
"""Creates the app specific operators and chain them up in the processing DAG."""

logging.info(f"Begin {self.compose.__name__}")

# Create the custom operator(s) as well as SDK built-in operator(s).
study_loader_op = DICOMDataLoaderOperator()
series_selector_op = DICOMSeriesSelectorOperator(Sample_Rules_Text)
series_to_vol_op = DICOMSeriesToVolumeOperator()

# Create the inference operator that supports MONAI Bundle and automates the inference.
# The IOMapping labels match the input and prediction keys in the pre and post processing.
# The model_name is optional when the app has only one model.
# The bundle_path argument optionally can be set to an accessible bundle file path in the dev
# environment, so when the app is packaged into a MAP, the operator can complete the bundle parsing
# during init to provide the optional packages info, parsed from the bundle, to the packager
# for it to install the packages in the MAP docker image.
# Setting output IOType to DISK works only for leaf operators, which is not the case in this example.
bundle_spleen_seg_op = MonaiBundleInferenceOperator(
input_mapping=[IOMapping("image", Image, IOType.IN_MEMORY)],
output_mapping=[IOMapping("pred", Image, IOType.IN_MEMORY)],
)

# Create DICOM Seg writer with segment label name in a string list
dicom_seg_writer = DICOMSegmentationWriterOperator(seg_labels=["Spleen"])

# Create the processing pipeline, by specifying the upstream and downstream operators, and
# ensuring the output from the former matches the input of the latter, in both name and type.
self.add_flow(study_loader_op, series_selector_op, {"dicom_study_list": "dicom_study_list"})
self.add_flow(
series_selector_op, series_to_vol_op, {"study_selected_series_list": "study_selected_series_list"}
)
self.add_flow(series_to_vol_op, bundle_spleen_seg_op, {"image": "image"})
# Note below that dicom_seg_writer requires two inputs, each coming from an upstream operator.
self.add_flow(
series_selector_op, dicom_seg_writer, {"study_selected_series_list": "study_selected_series_list"}
)
self.add_flow(bundle_spleen_seg_op, dicom_seg_writer, {"pred": "seg_image"})
# Create the surface mesh STL conversion operator and add it to the app execution flow, if needed, by
# uncommenting the following couple of lines.
# stl_conversion_op = STLConversionOperator(output_file="stl/spleen.stl")
# self.add_flow(bundle_spleen_seg_op, stl_conversion_op, {"pred": "image"})

logging.info(f"End {self.compose.__name__}")


# This is a sample series selection rule in JSON, simply selecting CT series.
# If the study has more than 1 CT series, then all of them will be selected.
# Please see more detail in DICOMSeriesSelectorOperator.
Sample_Rules_Text = """
{
"selections": [
{
"name": "CT Series",
"conditions": {
"StudyDescription": "(.*?)",
"Modality": "(?i)CT",
"SeriesDescription": "(.*?)"
}
}
]
}
"""

if __name__ == "__main__":
# Creates the app and tests it standalone. When running in this mode, please note the following:
# -m <model file>, for model file path
# -i <DICOM folder>, for input DICOM CT series folder
# -o <output folder>, for the output folder, default $PWD/output
# e.g.
# monai-deploy exec app.py -i input -m model/model.ts
#
logging.basicConfig(level=logging.DEBUG)
app_instance = AISpleenSegApp(do_run=True)
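
The selection rules in `Sample_Rules_Text` above are plain regular expressions matched against DICOM attribute values. A quick standalone check of how the two non-trivial patterns behave (illustrative only; the actual matching happens inside `DICOMSeriesSelectorOperator`):

```python
import re

# "(?i)CT" turns on case-insensitive matching, so both "CT" and "ct" pass:
assert re.match("(?i)CT", "CT")
assert re.match("(?i)CT", "ct")
assert not re.match("(?i)CT", "MR")

# "(.*?)" is a lazy match-anything, so any StudyDescription/SeriesDescription passes:
assert re.match("(.*?)", "ABD/PANC 3.0 B31f")
assert re.match("(.*?)", "")
```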