Nuance PIN MONAI Integration Example App #328
Merged
Commits (44):
- e902820 add example of Nuance PIN integration
- e3dcc8c adding lib folder for Nuance PIN wheels
- 0a99b55 updates for functional container
- fbe2df8 readme header
- 8448dae checking note div capabilities
- 506418a completing docs
- b92db78 format pass
- 7634f20 Update examples/integrations/nuance_pin/app/spleen_seg.py
- d1b2144 Update examples/integrations/nuance_pin/README.md (aihsani)
- 061701c Update examples/integrations/nuance_pin/README.md (aihsani)
- c11291c Update examples/integrations/nuance_pin/README.md (aihsani)
- d8c5365 CPB PR comments (aihsani)
- 95a3125 adding inference operator
- d2e30cc make covid lesion segmentation operator
- ada742d updating application to detection use case
- c2e942e using proper retina net inferer
- 9fa3ed5 bug fix
- 598916f updating selection rules for lidc dataset
- 0115528 setup output with domain-specific object
- 5e210b1 renaming app to lung nodule and creating post-inference operators
- 0b5810f updating IO bindings
- 55cc326 name updates, and updates flow to generate GSPS
- 8d13e32 updates gsps
- 58e9e50 map boxes to original dataset
- 0de86da use highdicom from dicom annotations
- e5513c1 sw batch size adjustment
- eaf8175 adjust gsps annotations
- 2c91c8e refactoring
- d1c2a85 removing
- 1ce48d7 adding Nuance PIN report generation
- 9119eb0 automatically download model into container
- f83fc51 fix numpy
- 8164767 fix base app name
- bf24dcd formatting
- b8896aa moving to top level and updating documentation
- 1df573a temporary comment
- c167615 fromatting
- 7668570 updates to get good results on LIDC data
- 2f9b4e3 format fixes
- dc7ccb6 update app and service name
- a646ec6 unused transforms
- 592be4a formatting fixes
- 0e91be7 formatting
- 15a36fb misspell
```diff
@@ -137,3 +137,6 @@ output
 # Sphinx temporary files
 docs/notebooks
 _autosummary
+
+# model files
+*.ts
```
```diff
@@ -0,0 +1,4 @@
+Dockerfile*
+docker-compose.yml
+README.md
+README*
```
Dockerfile:
```dockerfile
FROM nvcr.io/nvidia/pytorch:21.07-py3 AS application

ARG PARTNER_NAME
ARG SERVICE_NAME
ARG VERSION
ARG MONAI_APP_MODULE
ARG MODEL_PATH
ARG EXTRA_PYTHON_PACKAGES

ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# python3-gdcm or python-gdcm is required for decompression
RUN apt-get -y update && \
    apt-get -y install --no-install-recommends python3-distutils python3-gdcm && \
    apt-get autoclean && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

ENV DEBUG=YES
ENV KEEP_FILES=YES

# make sure all messages reach the console
ENV PYTHONUNBUFFERED=1

# copy MONAI app files
COPY . /app/.
WORKDIR /app

# non-root aiserviceuser in group aiserviceuser with UserID and GroupID 20225
RUN groupadd -g 20225 -r aiserviceuser && \
    useradd -u 20225 -r -g aiserviceuser aiserviceuser && \
    chown -R aiserviceuser:aiserviceuser /app /var
USER aiserviceuser:aiserviceuser

ENV VIRTUAL_ENV=.venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

RUN python -m pip install --upgrade pip && \
    python -m pip install --upgrade --no-cache-dir ${EXTRA_PYTHON_PACKAGES} -r requirements.txt && \
    python -m pip install --upgrade --no-cache-dir lib/ai_service-*-py3-none-any.whl && \
    rm -rf lib && \
    rm requirements.txt

ENV AI_PARTNER_NAME=${PARTNER_NAME}
ENV AI_SVC_NAME=${SERVICE_NAME}
ENV AI_SVC_VERSION=${VERSION}
ENV AI_MODEL_PATH=${MODEL_PATH}
ENV MONAI_APP_CLASSPATH=${MONAI_APP_MODULE}

ENV PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
CMD ["python", "app_wrapper.py"]
```
README.md:
# Running MONAI Apps in Nuance PIN

MONAI Deploy Apps can be deployed as Nuance PIN applications with minimal effort and near-zero coding.

This folder includes an example MONAI app, AI-based Spleen Segmentation, wrapped in the Nuance PIN API. The Nuance PIN wrapper code allows MONAI app developers, in most cases, to deploy their existing MONAI apps in Nuance without code changes.
## Prerequisites

Before setting up and running the example MONAI spleen segmentation app as a Nuance PIN App, the user will need to install or download the following libraries. Using a GPU is optional for the example app; however, a GPU is recommended for inference.

Minimum software requirements:
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- [NVIDIA Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#pre-requisites)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Nuance PIN SDK](https://www.nuance.com/healthcare/diagnostics-solutions/precision-imaging-network.html)

> **Note**: The Nuance PIN SDK does not require host installation for the example app to work. We will explore options in the [Quickstart](#quickstart) section.
## Quickstart

This integration example already contains the AI Spleen Segmentation code, which is an exact copy of the code found under `examples/apps/ai_spleen_seg_app`. However, to make the example work properly we need to download the spleen segmentation model and the data for local testing.

If you are reading this guide on the MONAI GitHub repo, you will need to clone the MONAI repo and change directory to the Nuance PIN integration path.
```bash
git clone https://github.com/Project-MONAI/monai-deploy-app-sdk.git
cd monai-deploy-app-sdk/examples/integrations/nuance_pin
```
In this folder you will see the following directory structure
```bash
nuance_pin
├── app                # directory with MONAI app code
├── lib                # directory where we will place Nuance PIN wheels
├── model              # directory where we will place the model used by our MONAI app
├── app_wrapper.py     # Nuance PIN wrapper code
├── docker-compose.yml # docker compose runtime script
├── Dockerfile         # container image build script
├── README.md          # this README
└── requirements.txt   # libraries required for the example integration to work
```
We will place the spleen segmentation model in the `nuance_pin/model` folder and use that as the model location for the code in `app/spleen_seg.py`; however, this is not a hard restriction. The developer may choose a location of their own within the `nuance_pin` subtree, but this change requires updating the `MODEL_PATH` variable in `docker-compose.yml`.
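Although the actual `docker-compose.yml` is not reproduced in this diff, a minimal sketch shows how such a compose file might wire the build arguments. The values below (service name, version, model path, port 5000) are taken from those used elsewhere in this example; the exact file in the repository may differ:

```yaml
# Illustrative sketch only -- consult the real docker-compose.yml in this folder.
version: "3.8"
services:
  ai_spleen_seg_app:
    build:
      context: .
      args:
        PARTNER_NAME: NVIDIA
        SERVICE_NAME: ai_spleen_seg_app
        VERSION: "0.1.0"
        MONAI_APP_MODULE: app.spleen_seg.AISpleenSegApp
        MODEL_PATH: model/model.ts
    ports:
      - "5000:5000"
```

Changing `MODEL_PATH` here is what allows the model to live somewhere other than `nuance_pin/model`.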
### Downloading Data and Model for Spleen Segmentation

To download the spleen model and test data you may follow the instructions in the MONAI Deploy [documentation](https://docs.monai.io/projects/monai-deploy-app-sdk/en/latest/getting_started/tutorials/03_segmentation_app.html#executing-from-shell). The steps are also summarized below:

```bash
# choose a download directory outside of the integration folder
pushd ~/Downloads

# install gdown
pip install gdown

# download spleen data and model
gdown https://drive.google.com/uc?id=1cJq0iQh_yzYIxVElSlVa141aEmHZADJh

# after downloading ai_spleen_bundle_data.zip from the web browser or using gdown, unzip it
unzip -o ai_spleen_bundle_data.zip

popd

# move the spleen model from the download directory to the integration folder model directory
mv ~/Downloads/model.ts model/.
```
Next we must place the Nuance PIN `ai_service` wheel in the `nuance_pin/lib` folder. This would have been obtained in step 3 of the [prerequisites](#prerequisites).
### Running the Example App in the Container

Now we are ready to build and start the container that runs our MONAI app as a Nuance service.
```bash
docker-compose up --build
```

If the build is successful, a service will start on `localhost:5000`. We can verify the service is running by issuing a "live" request such as
```bash
curl -v http://localhost:5000/aiservice/2/live && echo ""
```
The issued command should return the developer, app name, and version of the deployed example app.
Now we can run the example app with the example spleen data as the payload, using the Nuance PIN AI Service Test (`AiSvcTest`) utility obtained with the Nuance PIN SDK.
```bash
# create a virtual environment and activate it
python3 -m venv /opt/venv
. /opt/venv/bin/activate

# install AiSvcTest
pip install AiSvcTest-<version>-py3-none-any.whl

# create an output directory for the inference results
mkdir -p ~/Downloads/dcm/out

# run AiSvcTest with the spleen DICOM payload
python -m AiSvcTest -i ~/Downloads/dcm -o ~/Downloads/dcm/out -s http://localhost:5000 -V 2 -k
```
### Running the Example App on the Host

Alternatively, the user may choose to run the Nuance PIN service directly on the host. For this we must install the following:
- Nuance PIN AI Service libraries
- Libraries in the `requirements.txt`

```bash
# create a virtual environment and activate it
python3 -m venv /opt/venv
. /opt/venv/bin/activate

# install Nuance AI Service
pip install ai_service-<version>-py3-none-any.whl

# install requirements
pip install -r requirements.txt

# run the service
export AI_PARTNER_NAME=NVIDIA
export AI_SVC_NAME=ai_spleen_seg_app
export AI_SVC_VERSION=0.1.0
export AI_MODEL_PATH=model/model.ts
export MONAI_APP_CLASSPATH=app.spleen_seg.AISpleenSegApp
export PYTHONPATH=$PYTHONPATH:.
python app_wrapper.py
```
Now we can issue a "live" request to check whether the service is running
```bash
curl -v http://localhost:5000/aiservice/2/live && echo ""
```
As we did in the last section, we can now run the example app with the example spleen data as the payload, using the Nuance PIN AI Service Test (`AiSvcTest`) utility obtained with the Nuance PIN SDK.
```bash
. /opt/venv/bin/activate

# install AiSvcTest
pip install AiSvcTest-<version>-py3-none-any.whl

# create an output directory for the inference results
mkdir -p ~/Downloads/dcm/out

# run AiSvcTest with the spleen DICOM payload
python -m AiSvcTest -i ~/Downloads/dcm -o ~/Downloads/dcm/out -s http://localhost:5000 -V 2 -k
```
### Bring Your Own MONAI App

This example integration may be modified to fit any existing MONAI app; however, there may be caveats.

Nuance PIN requires all artifacts present in the output folder to also be added to the `resultManifest.json` output file for the run to be considered successful. To see what this means in practical terms, check the `resultManifest.json` output from the example app we ran in the previous sections. You will notice an entry in `resultManifest.json` that corresponds to the DICOM SEG output generated by the underlying MONAI app
```json
"study": {
    "uid": "1.2.826.0.1.3680043.2.1125.1.67295333199898911264201812221946213",
    "artifacts": [],
    "series": [
        {
            "uid": "1.2.826.0.1.3680043.2.1125.1.67295333199898911264201812221946213",
            "artifacts": [
                {
                    "documentType": "application/dicom",
                    "groupCode": "default",
                    "name": "dicom_seg-DICOMSEG.dcm",
                    "trackingUids": []
                }
            ]
        }
    ]
},
```
This entry is generated by `app_wrapper.py`, which takes care of adding any DICOM file present in the output folder to the `resultManifest.json`, ensuring that existing MONAI apps complete successfully when deployed in Nuance. In general, however, the developer may need to tailor some of the code in `app_wrapper.py` to provide more insight to Nuance's network, such as adding findings and conclusions, or generating further insight using SNOMED codes. All of this is handled within the Nuance PIN SDK libraries; for more information please consult the Nuance PIN [documentation](https://www.nuance.com/healthcare/diagnostics-solutions/precision-imaging-network.html).

In simpler cases, the developer will need to place their code and model under `nuance_pin`. Placing the model under `model` is optional, as the model may be placed anywhere the code under `app` can access it; however, considerations must be taken when the model needs to be deployed inside a container image. The MONAI app code is placed in `app` and structured as a small Python project.
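The manifest bookkeeping described above can be pictured with a small sketch. The helper below is hypothetical (it is not the actual `app_wrapper.py` code, and the real Nuance PIN manifest schema has more fields), but it shows the core idea: every DICOM file in the output folder becomes an artifact entry.

```python
from pathlib import Path


def build_series_artifacts(output_dir: str) -> list:
    """Collect every DICOM file in output_dir as a manifest artifact entry.

    Hypothetical helper illustrating what the wrapper does conceptually;
    the real resultManifest.json schema carries additional fields.
    """
    artifacts = []
    for dcm in sorted(Path(output_dir).glob("*.dcm")):
        artifacts.append(
            {
                "documentType": "application/dicom",
                "groupCode": "default",
                "name": dcm.name,
                "trackingUids": [],
            }
        )
    return artifacts
```

An artifact entry like `dicom_seg-DICOMSEG.dcm` in the manifest shown earlier would be produced by exactly this kind of scan over the app's output folder.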
```python
# Copyright 2021-2022 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
app/spleen_seg.py:
```python
# Copyright 2021-2022 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from monai.deploy.core import Application, resource
from monai.deploy.core.domain import Image
from monai.deploy.core.io_type import IOType
from monai.deploy.operators.dicom_data_loader_operator import DICOMDataLoaderOperator
from monai.deploy.operators.dicom_seg_writer_operator import DICOMSegmentationWriterOperator
from monai.deploy.operators.dicom_series_selector_operator import DICOMSeriesSelectorOperator
from monai.deploy.operators.dicom_series_to_volume_operator import DICOMSeriesToVolumeOperator
from monai.deploy.operators.monai_bundle_inference_operator import IOMapping, MonaiBundleInferenceOperator

# from monai.deploy.operators.stl_conversion_operator import STLConversionOperator  # import as needed.


@resource(cpu=1, gpu=1, memory="7Gi")
# pip_packages can be a string that is a path(str) to requirements.txt file or a list of packages.
# The monai pkg is not required by this class, instead by the included operators.
class AISpleenSegApp(Application):
    def __init__(self, *args, **kwargs):
        """Creates an application instance."""
        self._logger = logging.getLogger("{}.{}".format(__name__, type(self).__name__))
        super().__init__(*args, **kwargs)

    def run(self, *args, **kwargs):
        # This method calls the base class to run. Can be omitted if simply calling through.
        self._logger.info(f"Begin {self.run.__name__}")
        super().run(*args, **kwargs)
        self._logger.info(f"End {self.run.__name__}")

    def compose(self):
        """Creates the app-specific operators and chains them up in the processing DAG."""

        logging.info(f"Begin {self.compose.__name__}")

        # Create the custom operator(s) as well as SDK built-in operator(s).
        study_loader_op = DICOMDataLoaderOperator()
        series_selector_op = DICOMSeriesSelectorOperator(Sample_Rules_Text)
        series_to_vol_op = DICOMSeriesToVolumeOperator()

        # Create the inference operator that supports MONAI Bundle and automates the inference.
        # The IOMapping labels match the input and prediction keys in the pre and post processing.
        # The model_name is optional when the app has only one model.
        # The bundle_path argument optionally can be set to an accessible bundle file path in the dev
        # environment, so when the app is packaged into a MAP, the operator can complete the bundle parsing
        # during init to provide the optional packages info, parsed from the bundle, to the packager
        # for it to install the packages in the MAP docker image.
        # Setting output IOType to DISK works only for leaf operators, which is not the case in this example.
        bundle_spleen_seg_op = MonaiBundleInferenceOperator(
            input_mapping=[IOMapping("image", Image, IOType.IN_MEMORY)],
            output_mapping=[IOMapping("pred", Image, IOType.IN_MEMORY)],
        )

        # Create DICOM Seg writer with segment label name in a string list
        dicom_seg_writer = DICOMSegmentationWriterOperator(seg_labels=["Spleen"])

        # Create the processing pipeline, by specifying the upstream and downstream operators, and
        # ensuring the output from the former matches the input of the latter, in both name and type.
        self.add_flow(study_loader_op, series_selector_op, {"dicom_study_list": "dicom_study_list"})
        self.add_flow(
            series_selector_op, series_to_vol_op, {"study_selected_series_list": "study_selected_series_list"}
        )
        self.add_flow(series_to_vol_op, bundle_spleen_seg_op, {"image": "image"})
        # Note below the dicom_seg_writer requires two inputs, each coming from an upstream operator.
        self.add_flow(
            series_selector_op, dicom_seg_writer, {"study_selected_series_list": "study_selected_series_list"}
        )
        self.add_flow(bundle_spleen_seg_op, dicom_seg_writer, {"pred": "seg_image"})
        # Create the surface mesh STL conversion operator and add it to the app execution flow,
        # if needed, by uncommenting the following couple of lines.
        # stl_conversion_op = STLConversionOperator(output_file="stl/spleen.stl")
        # self.add_flow(bundle_spleen_seg_op, stl_conversion_op, {"pred": "image"})

        logging.info(f"End {self.compose.__name__}")


# This is a sample series selection rule in JSON, simply selecting CT series.
# If the study has more than 1 CT series, then all of them will be selected.
# Please see more detail in DICOMSeriesSelectorOperator.
Sample_Rules_Text = """
{
    "selections": [
        {
            "name": "CT Series",
            "conditions": {
                "StudyDescription": "(.*?)",
                "Modality": "(?i)CT",
                "SeriesDescription": "(.*?)"
            }
        }
    ]
}
"""

if __name__ == "__main__":
    # Creates the app and tests it standalone. When running in this mode, please note the following:
    #     -m <model file>, for model file path
    #     -i <DICOM folder>, for input DICOM CT series folder
    #     -o <output folder>, for the output folder, default $PWD/output
    # e.g.
    #     monai-deploy exec app.py -i input -m model/model.ts
    #
    logging.basicConfig(level=logging.DEBUG)
    app_instance = AISpleenSegApp(do_run=True)
```
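The series selection rules in `Sample_Rules_Text` are regular expressions matched against DICOM attributes. A simplified sketch of the matching idea follows; the real `DICOMSeriesSelectorOperator` implements richer semantics, and the attribute values here are illustrative:

```python
import json
import re

# A pared-down rule set: select any series whose Modality matches "CT",
# case-insensitively thanks to the inline (?i) flag.
rules = json.loads("""
{
    "selections": [
        {"name": "CT Series", "conditions": {"Modality": "(?i)CT"}}
    ]
}
""")


def series_matches(attrs: dict, conditions: dict) -> bool:
    """Return True when every condition regex matches the series attribute."""
    return all(
        re.match(pattern, str(attrs.get(key, ""))) is not None
        for key, pattern in conditions.items()
    )


conditions = rules["selections"][0]["conditions"]
print(series_matches({"Modality": "ct"}, conditions))  # True: (?i) makes the match case-insensitive
print(series_matches({"Modality": "MR"}, conditions))  # False: Modality does not match
```

Because `StudyDescription` and `SeriesDescription` in the sample rules use the catch-all pattern `(.*?)`, the effective filter in the app above is the `Modality` condition.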