# Running MONAI Apps in Nuance PIN

MONAI Deploy Apps can be deployed as Nuance PIN applications with minimal effort and near-zero coding.

This folder includes an example MONAI app, AI-based Lung Nodule Segmentation, which is wrapped by the Nuance PIN API.
The Nuance PIN wrapper code allows MONAI app developers to deploy their existing MONAI apps in Nuance PIN
with minimal code changes.

## Prerequisites

Before setting up and running the example MONAI lung nodule detection app as a Nuance PIN App, the user will need to install/download the following libraries.
A GPU is optional for the example app; however, it is recommended for inference, which is computationally intensive.

Minimum software requirements:
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- [NVIDIA Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#pre-requisites)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Nuance PIN SDK](https://www.nuance.com/healthcare/diagnostics-solutions/precision-imaging-network.html)

> **Note**: The Nuance PIN SDK does not require host installation to make the example app work. We will explore options in the [Quickstart](#quickstart) section.

## Quickstart

If you are reading this guide on the MONAI GitHub repo, you will need to clone the MONAI repo and change the directory to the Nuance PIN integration path.
```bash
git clone https://github.com/Project-MONAI/monai-deploy-app-sdk.git
cd monai-deploy-app-sdk/integrations/nuance_pin
```

In this folder you will see the following directory structure:
```bash
nuance_pin
 ├── app                # directory with MONAI app code
 ├── lib                # directory where we will place Nuance PIN wheels
 ├── model              # directory where we will place the model used by our MONAI app
 ├── app_wrapper.py     # Nuance PIN wrapper code
 ├── docker-compose.yml # docker compose runtime script
 ├── Dockerfile         # container image build script
 ├── README.md          # this README
 └── requirements.txt   # libraries required for the example integration to work
```

When building the base container image, the [Lung Nodule Detection model](https://github.com/Project-MONAI/model-zoo/releases/download/hosting_storage_v1/lung_nodule_ct_detection_v0.2.0.zip), available from the [MONAI model zoo](https://github.com/Project-MONAI/model-zoo/releases/tag/hosting_storage_v1), is automatically downloaded into the `model` folder. Should the developer choose a different location within the `nuance_pin` subtree for their model, the `MODEL_PATH` variable in `docker-compose.yml` must be updated accordingly.
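For illustration, a relocated model would be wired up through the compose file roughly as follows. This is a sketch only: the service name and surrounding keys are assumptions, so match them to the entries already present in the shipped `docker-compose.yml`.

```yaml
services:
  monai-pin-app:                    # hypothetical service name; use the one in the shipped file
    environment:
      # point MODEL_PATH at the model's location inside the container
      - MODEL_PATH=/app/model/model.ts
```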

### Downloading Data for Lung Nodule Detection

To download the test data you may follow the instructions in the [Lung Nodule Detection documentation](https://github.com/Project-MONAI/model-zoo/tree/dev/models/lung_nodule_ct_detection#data).

### Download the Nuance PIN SDK

Place the Nuance PIN `ai_service` wheel in the `nuance_pin/lib` folder. It can be obtained via the link provided in the [prerequisites](#prerequisites).

### Running the Example App in the Container

Now we are ready to build and start the container that runs our MONAI app as a Nuance service.
```bash
docker-compose up --build
```

If the build is successful, a service will start on `localhost:5000`. We can verify that the service is running
by issuing a "live" request such as
```bash
curl -v http://localhost:5000/aiservice/2/live && echo ""
```
The issued command should return the developer, app, and version of the deployed example app.
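If you prefer scripting the check, the same "live" probe can be sketched in Python with only the standard library. The endpoint path comes from the `curl` command above; the helper function names are our own:

```python
import urllib.error
import urllib.request


def live_url(server: str, api_version: int = 2) -> str:
    """Build the Nuance PIN 'live' endpoint URL used in the curl command above."""
    return f"{server}/aiservice/{api_version}/live"


def check_live(server: str, timeout: float = 5.0) -> bool:
    """Return True if the service answers the live probe with HTTP 200."""
    try:
        with urllib.request.urlopen(live_url(server), timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # service not reachable (not started, wrong port, etc.)
        return False


if __name__ == "__main__":
    print(check_live("http://localhost:5000"))
```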
| 66 | + |
| 67 | +Now we can run the example app with the example spleen data as the payload using Nuance PIN AI Service Test |
| 68 | +(`AiSvcTest`) utility obtained with the Nuance PIN SDK. |
| 69 | +```bash |
| 70 | +# create a virtual environment and activate it |
| 71 | +python3 -m venv /opt/venv |
| 72 | +. /opt/venv/bin/activate |
| 73 | + |
| 74 | +# install AiSvcTest |
| 75 | +pip install AiSvcTest-<version>-py3-none-any.whl |
| 76 | + |
| 77 | +# create an output directory for the inference results |
| 78 | +mkdir -p ~/Downloads/dcm/out |
| 79 | + |
| 80 | +# run AiSvcTest with spleen dicom payload |
| 81 | +python -m AiSvcTest -i ~/Downloads/dcm -o ~/Downloads/dcm/out -s http://localhost:5000 -V 2 -k |
| 82 | +``` |

### Running the Example App on the Host

Alternatively, the user may choose to run the Nuance PIN service directly on the host. For this we must install the following:
- the Nuance PIN AI Service libraries
- the libraries in `requirements.txt`

```bash
# create a virtual environment and activate it
python3 -m venv /opt/venv
. /opt/venv/bin/activate

# install the Nuance AI Service
pip install ai_service-<version>-py3-none-any.whl

# install requirements
pip install -r requirements.txt

# download the lung nodule detection model
wget -q https://github.com/Project-MONAI/model-zoo/releases/download/hosting_storage_v1/lung_nodule_ct_detection_v0.2.0.zip && \
unzip lung_nodule_ct_detection_v0.2.0.zip -d /tmp/ && \
cp /tmp/lung_nodule_ct_detection/models/model.ts model/. && \
rm -rf /tmp/lung_nodule_ct_detection && \
rm lung_nodule_ct_detection_v0.2.0.zip

# run the service
# NOTE: MONAI_APP_CLASSPATH must point at the Application class of the app under `app/`
export AI_PARTNER_NAME=NVIDIA
export AI_SVC_NAME=ai_spleen_seg_app
export AI_SVC_VERSION=0.1.0
export AI_MODEL_PATH=model/model.ts
export MONAI_APP_CLASSPATH=app.spleen_seg.AISpleenSegApp
export PYTHONPATH=$PYTHONPATH:.
python app_wrapper.py
```
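The environment variables above are how `app_wrapper.py` receives its configuration. A minimal sketch of reading and validating them might look like this (the function name is ours; the wrapper's actual parsing may differ):

```python
import os


def read_service_config(env=os.environ) -> dict:
    """Collect the wrapper's settings from environment variables, failing fast on gaps."""
    keys = [
        "AI_PARTNER_NAME",
        "AI_SVC_NAME",
        "AI_SVC_VERSION",
        "AI_MODEL_PATH",
        "MONAI_APP_CLASSPATH",
    ]
    missing = [k for k in keys if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing required settings: {', '.join(missing)}")
    return {k: env[k] for k in keys}
```

Failing fast with a clear message is preferable here, since a missing classpath or model path would otherwise surface only as an obscure import or file-not-found error at request time.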

Now we can issue a "live" request to check whether the service is running
```bash
curl -v http://localhost:5000/aiservice/2/live && echo ""
```
As we did in the last section, we can now run the example app with the example lung nodule data as the payload using the Nuance PIN AI Service Test
(`AiSvcTest`) utility obtained with the Nuance PIN SDK.
```bash
. /opt/venv/bin/activate

# install AiSvcTest
pip install AiSvcTest-<version>-py3-none-any.whl

# create an output directory for the inference results
mkdir -p ~/Downloads/dcm/out

# run AiSvcTest with the lung nodule DICOM payload
python -m AiSvcTest -i ~/Downloads/dcm -o ~/Downloads/dcm/out -s http://localhost:5000 -V 2 -k
```

### Bring Your Own MONAI App

This example integration may be modified to fit any existing MONAI app; however, there are a few caveats.

Nuance PIN requires all artifacts present in the output folder to also be listed in the `resultManifest.json` output file
for the run to be considered successful. To see what this means in practical terms, check the `resultManifest.json` output from the
example app we ran in the previous sections. You will notice an entry in `resultManifest.json` that corresponds to the DICOM
SEG output generated by the underlying MONAI app
```json
  "study": {
    "uid": "1.2.826.0.1.3680043.2.1125.1.67295333199898911264201812221946213",
    "artifacts": [],
    "series": [
      {
        "uid": "1.2.826.0.1.3680043.2.1125.1.67295333199898911264201812221946213",
        "artifacts": [
          {
            "documentType": "application/dicom",
            "groupCode": "default",
            "name": "dicom_seg-DICOMSEG.dcm",
            "trackingUids": []
          }
        ]
      }
    ]
  },
```
This entry is generated by `app_wrapper.py`, which takes care of adding any DICOM files present in the output folder to the `resultManifest.json`
to ensure that existing MONAI apps complete successfully when deployed in Nuance PIN. In general, however, the developer may need to tailor some
of the code in `app_wrapper.py` to provide more insight to Nuance's network, such as adding findings, conclusions, and additional detail
via SNOMED codes. All of this is handled within the Nuance PIN SDK libraries; for more information please consult the Nuance PIN [documentation](https://www.nuance.com/healthcare/diagnostics-solutions/precision-imaging-network.html).

In simpler cases, the developer only needs to place their code and model under `nuance_pin`. Placing the model under `model` is optional, as the model may be placed
anywhere the code under `app` can access it; however, this must be taken into account when deploying the model inside a container image. The MONAI app code
is placed in `app` and structured as a small Python project.