
ROI Inference pipeline for HoVerNet #1055


Merged
merged 33 commits into from
Dec 8, 2022
33 commits
979e05a
Add a draft of inference
bhashemian Nov 10, 2022
d3b77de
Uncomment load weights
bhashemian Nov 14, 2022
fc271ef
Add infer_roi
bhashemian Nov 15, 2022
6582442
Major updates
bhashemian Nov 17, 2022
d9d19b5
Update the pipeline
bhashemian Nov 21, 2022
4d6bd79
Merge branch 'main' into wsi-inference-hovernet
bhashemian Nov 21, 2022
0cdac25
keep rio inference only
bhashemian Nov 21, 2022
e58050c
Remove test-oly lines
bhashemian Nov 21, 2022
123b721
Add sw batch size
bhashemian Nov 21, 2022
04c1f45
Change settings:
bhashemian Nov 21, 2022
6b61c77
Address comments
bhashemian Nov 22, 2022
a534937
Merge branch 'main' into wsi-inference-hovernet
bhashemian Nov 22, 2022
ea99823
clean up
bhashemian Nov 22, 2022
e79f5f8
fix a typo
bhashemian Nov 22, 2022
0f0884f
change logic of few args
bhashemian Nov 22, 2022
b1dac05
Add multi-gpu
bhashemian Nov 22, 2022
ccc62ed
Add/remove prints
bhashemian Nov 22, 2022
a10f7a8
Rename to inference
bhashemian Nov 22, 2022
ec9d8f1
Add output class
bhashemian Nov 22, 2022
73400f2
Update to FalttenSubKeysd
bhashemian Nov 23, 2022
eb23e7d
Merge branch 'main' of https://github.com/Project-MONAI/tutorials int…
bhashemian Nov 30, 2022
aa4a770
Update with the new hovernet postprocessing
bhashemian Dec 1, 2022
5eb776f
Merge branch 'main' into wsi-inference-hovernet
bhashemian Dec 1, 2022
a4aa476
change to nuclear type
bhashemian Dec 2, 2022
4deb7c5
to png
bhashemian Dec 5, 2022
16ad09e
remove test transform
bhashemian Dec 5, 2022
f24b2c2
Merge branch 'main' into wsi-inference-hovernet
bhashemian Dec 8, 2022
7b06bb5
Merge branch 'main' into wsi-inference-hovernet
bhashemian Dec 8, 2022
9c4a2bf
Some updates
bhashemian Dec 8, 2022
d03067a
Remove device
bhashemian Dec 8, 2022
0450963
few improvements
bhashemian Dec 8, 2022
56d2e98
improvments and bug fix
bhashemian Dec 8, 2022
8df9008
Update default run
bhashemian Dec 8, 2022
100 changes: 63 additions & 37 deletions pathology/hovernet/README.MD
@@ -3,89 +3,115 @@
This folder contains Ignite-based examples to train and validate a HoVerNet model.
It also has PyTorch notebooks to run training and evaluation.
<p align="center">
<img src="https://ars.els-cdn.com/content/image/1-s2.0-S1361841519301045-fx1_lrg.jpg" alt="hovernet scheme">
<img src="https://ars.els-cdn.com/content/image/1-s2.0-S1361841519301045-fx1_lrg.jpg" alt="HoVerNet scheme">
</p>
Implementation based on:

Simon Graham et al., "HoVer-Net: Simultaneous Segmentation and Classification of Nuclei in Multi-Tissue Histology Images." Medical Image Analysis (2019). https://arxiv.org/abs/1812.06499
Simon Graham et al., "HoVer-Net: Simultaneous Segmentation and Classification of Nuclei in Multi-Tissue Histology Images." Medical Image Analysis (2019). <https://arxiv.org/abs/1812.06499>

### 1. Data

CoNSeP datasets which are used in the examples can be downloaded from https://warwick.ac.uk/fac/cross_fac/tia/data/hovernet/.
- First download CoNSeP dataset to `data_root`.
- Run prepare_patches.py to prepare patches from images.
The CoNSeP dataset used in the examples can be downloaded from <https://warwick.ac.uk/fac/cross_fac/tia/data/HoVerNet/>.

- First, download the CoNSeP dataset to `DATA_ROOT` (default is `"/workspace/Data/Pathology/CoNSeP"`).
- Run `python prepare_patches.py` to prepare patches from images.

### 2. Questions and bugs

- For questions relating to the use of MONAI, please use our [Discussions tab](https://github.com/Project-MONAI/MONAI/discussions) on the main repository of MONAI.
- For bugs relating to MONAI functionality, please create an issue on the [main repository](https://github.com/Project-MONAI/MONAI/issues).
- For bugs relating to the running of a tutorial, please create an issue in [this repository](https://github.com/Project-MONAI/Tutorials/issues).


### 3. List of notebooks and examples

#### [Prepare Your Data](./prepare_patches.py)
This example is used to prepare patches from tiles referring to the implementation from https://github.com/vqdang/hover_net/blob/master/extract_patches.py. Prepared patches will be saved in `data_root`/Prepared.

This example prepares patches from tiles, following the implementation from <https://github.com/vqdang/hover_net/blob/master/extract_patches.py>. Prepared patches will be saved in `DATA_ROOT/Prepared`.

```bash
# Run to know all possible options
# Run to get all possible arguments
python ./prepare_patches.py -h

# Prepare patches from images
# Prepare patches from images using default arguments
python ./prepare_patches.py

# Prepare patches using custom arguments
python ./prepare_patches.py \
--root `data_root`
--root `DATA_ROOT` \
--ps 540 540 \
--ss 164 164
```
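As a rough illustration of what the patch-size (`--ps`) and step-size (`--ss`) arguments mean, here is a minimal, hypothetical sketch of stride-based patch extraction. The real script follows vqdang's `extract_patches.py` and also handles annotations; `extract_patches` below is made up for illustration:

```python
import numpy as np

def extract_patches(image: np.ndarray, patch_size: int, stride: int) -> np.ndarray:
    """Extract square patches from an HxWxC image with a fixed stride."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A 1000x1000 tile with 540x540 patches and a 164-pixel stride:
# 3 window positions per axis, so 9 patches in total.
tile = np.zeros((1000, 1000, 3), dtype=np.uint8)
patches = extract_patches(tile, patch_size=540, stride=164)
print(patches.shape)  # (9, 540, 540, 3)
```

With these defaults, overlapping 540-pixel windows every 164 pixels give dense coverage of each tile.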

#### [HoVerNet Training](./training.py)

This example uses a MONAI workflow to train a HoVerNet model on the prepared CoNSeP dataset.
Since HoVerNet is training via a two-stage approach. First initialised the model with pre-trained weights on the [ImageNet dataset](https://ieeexplore.ieee.org/document/5206848), trained only the decoders for the first 50 epochs, and then fine-tuned all layers for another 50 epochs. We need to specify `--stage` during training.
HoVerNet is trained via a two-stage approach: the model is first initialized with weights pre-trained on the [ImageNet dataset](https://ieeexplore.ieee.org/document/5206848), only the decoders are trained for the first 50 epochs, and then all layers are fine-tuned for another 50 epochs. The stage needs to be specified with `--stage` during training.

Each user is responsible for checking the content of models/datasets and the applicable licenses and determining if suitable for the intended use.
The license for the pre-trained model used in the examples is different from the MONAI license. Please check the source where these weights are obtained from:
https://github.com/vqdang/hover_net#data-format
<https://github.com/vqdang/hover_net#data-format>

If you didn't use the default value in data preparation, set ``--root `DATA_ROOT`/Prepared`` for each of the training commands.

```bash
# Run to know all possible options
# Run to get all possible arguments
python ./training.py -h

# Train a hovernet model on single-gpu(replace with your own ckpt path)
# Train a HoVerNet model on single-GPU or CPU-only (replace with your own ckpt path)
export CUDA_VISIBLE_DEVICES=0; python training.py \
--ep 50 \
--stage 0 \
--ep 50 \
--bs 16 \
--root `save_root`
--log-dir ./logs
export CUDA_VISIBLE_DEVICES=0; python training.py \
--ep 50 \
--stage 1 \
--bs 4 \
--root `save_root` \
--ckpt logs/stage0/checkpoint_epoch=50.pt

# Train a hovernet model on multi-gpu (NVIDIA)(replace with your own ckpt path)
torchrun --nnodes=1 --nproc_per_node=2 training.py \
--ep 50 \
--bs 8 \
--root `save_root` \
--stage 0
torchrun --nnodes=1 --nproc_per_node=2 training.py \
--ep 50 \
--bs 2 \
--root `save_root` \
--stage 1 \
--ckpt logs/stage0/checkpoint_epoch=50.pt
--ep 50 \
--bs 16 \
--log-dir ./logs \
--ckpt logs/stage0/model.pt

# Train a HoVerNet model on multi-GPU with default arguments
torchrun --nnodes=1 --nproc_per_node=2 training.py
torchrun --nnodes=1 --nproc_per_node=2 training.py --stage 1
```
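The two-stage scheme amounts to toggling gradient flow through the pre-trained encoder (MONAI's `HoVerNet` exposes this via its `freeze_encoder` argument, as the model construction in `evaluation.py` shows). Here is a minimal sketch with a toy stand-in network; `TwoStageNet` and `set_stage` are made up for illustration:

```python
import torch.nn as nn

class TwoStageNet(nn.Module):
    """Toy stand-in for HoVerNet: an 'encoder' plus a task 'decoder'."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(8, 2, 1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def set_stage(model: TwoStageNet, stage: int) -> None:
    # Stage 0 freezes the (pre-trained) encoder so only the decoders train;
    # stage 1 unfreezes it so all layers are fine-tuned.
    for p in model.encoder.parameters():
        p.requires_grad = stage == 1

model = TwoStageNet()
set_stage(model, 0)
print(all(not p.requires_grad for p in model.encoder.parameters()))  # True
```

In the real pipeline, stage 1 additionally resumes from the stage-0 checkpoint passed via `--ckpt`.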

#### [HoVerNet Validation](./evaluation.py)

This example uses a MONAI workflow to evaluate the trained HoVerNet model on prepared test data from the CoNSeP dataset, using their metrics in `original` mode. We reproduce the results with Dice: 0.82762; PQ: 0.48976; F1d: 0.73592.

```bash
# Run to know all possible options
# Run to get all possible arguments
python ./evaluation.py -h

# Evaluate a HoVerNet model
python ./evaluation.py
# Evaluate a HoVerNet model on single-GPU or CPU-only
python ./evaluation.py \
--root `save_root` \
--ckpt logs/stage0/checkpoint_epoch=50.pt
--ckpt logs/stage0/model.pt

# Evaluate a HoVerNet model on multi-GPU with default arguments
torchrun --nnodes=1 --nproc_per_node=2 evaluation.py
```
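For reference, the Dice score reported above is, on binary masks, just 2·|A∩B| / (|A|+|B|). A minimal sketch follows; the toy masks are made up for illustration, and the example itself uses MONAI's `MeanDice` handler rather than this function:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice score for binary masks: 2*|A∩B| / (|A|+|B|)."""
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

# Two toy 2x3 masks overlapping in 2 of their 3 foreground pixels each:
# Dice = 2*2 / (3+3) = 0.6667.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(dice(pred, target), 4))  # 0.6667
```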

#### [HoVerNet Inference](./inference.py)

This example uses a MONAI workflow to run inference for a HoVerNet model on an arbitrarily sized region of interest.
Under the hood, it uses a sliding-window approach to run inference on overlapping patches and then stitches the results
together into an output image the same size as the input. It then runs post-processing on this output image to create
the final results. This example saves the instance map and type map as PNG files, but it can be modified to save any
output of interest.

```bash
# Run to get all possible arguments
python ./inference.py -h

# Run HoVerNet inference on single-GPU or CPU-only
python ./inference.py \
--root `save_root` \
--ckpt logs/stage0/model.pt

# Run HoVerNet inference on multi-GPU with default arguments
torchrun --nnodes=1 --nproc_per_node=2 ./inference.py
```
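The sliding-window stitching described above can be sketched as follows. This is a simplified, hypothetical illustration of the idea (the example itself relies on MONAI's sliding-window inference), using an identity "model" so the stitched output can be checked against the input:

```python
import numpy as np

def sliding_window_predict(image, model, roi=64, overlap=0.5):
    """Run `model` on overlapping patches and average the overlaps."""
    step = int(roi * (1 - overlap))
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    ys = list(range(0, max(h - roi, 0) + 1, step))
    xs = list(range(0, max(w - roi, 0) + 1, step))
    # Make sure the last window reaches the image border.
    if ys[-1] != h - roi:
        ys.append(h - roi)
    if xs[-1] != w - roi:
        xs.append(w - roi)
    for y in ys:
        for x in xs:
            patch = image[y:y + roi, x:x + roi]
            out[y:y + roi, x:x + roi] += model(patch)
            count[y:y + roi, x:x + roi] += 1
    # Each pixel is the average of every window that covered it.
    return out / count

# Identity "model" on a constant image: the stitched output equals the input.
img = np.full((100, 100), 7.0, dtype=np.float32)
result = sliding_window_predict(img, lambda p: p)
print(np.allclose(result, img))  # True
```

A real run would replace the identity lambda with the network forward pass and apply the HoVerNet post-processing to the stitched map.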

## Disclaimer
54 changes: 30 additions & 24 deletions pathology/hovernet/evaluation.py
@@ -28,21 +28,18 @@


def prepare_data(data_dir, phase):
data_dir = os.path.join(data_dir, phase)
"""prepare data list"""

images = list(sorted(
glob.glob(os.path.join(data_dir, "*/*image.npy"))))
inst_maps = list(sorted(
glob.glob(os.path.join(data_dir, "*/*inst_map.npy"))))
type_maps = list(sorted(
glob.glob(os.path.join(data_dir, "*/*type_map.npy"))))
data_dir = os.path.join(data_dir, phase)
images = sorted(glob.glob(os.path.join(data_dir, "*image.npy")))
inst_maps = sorted(glob.glob(os.path.join(data_dir, "*inst_map.npy")))
type_maps = sorted(glob.glob(os.path.join(data_dir, "*type_map.npy")))

data_dicts = [
data_list = [
{"image": _image, "label_inst": _inst_map, "label_type": _type_map}
for _image, _inst_map, _type_map in zip(images, inst_maps, type_maps)
]

return data_dicts
return data_list


def run(cfg):
@@ -75,13 +72,10 @@ def run(cfg):
)

# Create MONAI DataLoaders
valid_data = prepare_data(cfg["root"], "valid")
valid_data = prepare_data(cfg["root"], "Test")
valid_ds = CacheDataset(data=valid_data, transform=val_transforms, cache_rate=1.0, num_workers=4)
val_loader = DataLoader(
valid_ds,
batch_size=cfg["batch_size"],
num_workers=cfg["num_workers"],
pin_memory=torch.cuda.is_available()
valid_ds, batch_size=cfg["batch_size"], num_workers=cfg["num_workers"], pin_memory=torch.cuda.is_available()
)

# initialize model
@@ -95,23 +89,31 @@
freeze_encoder=False,
).to(device)

post_process_np = Compose([
Activationsd(keys=HoVerNetBranch.NP.value, softmax=True),
Lambdad(keys=HoVerNetBranch.NP.value, func=lambda x: x[1: 2, ...] > 0.5)])
post_process_np = Compose(
[
Activationsd(keys=HoVerNetBranch.NP.value, softmax=True),
Lambdad(keys=HoVerNetBranch.NP.value, func=lambda x: x[1:2, ...] > 0.5),
]
)
post_process = Lambdad(keys="pred", func=post_process_np)

# Evaluator
val_handlers = [
CheckpointLoader(load_path=cfg["ckpt_path"], load_dict={"net": model}),
CheckpointLoader(load_path=cfg["ckpt"], load_dict={"net": model}),
StatsHandler(output_transform=lambda x: None),
]
evaluator = SupervisedEvaluator(
device=device,
val_data_loader=val_loader,
prepare_batch=PrepareBatchHoVerNet(extra_keys=['label_type', 'hover_label_inst']),
prepare_batch=PrepareBatchHoVerNet(extra_keys=["label_type", "hover_label_inst"]),
network=model,
postprocessing=post_process,
key_val_metric={"val_dice": MeanDice(include_background=False, output_transform=from_engine_hovernet(keys=["pred", "label"], nested_key=HoVerNetBranch.NP.value))},
key_val_metric={
"val_dice": MeanDice(
include_background=False,
output_transform=from_engine_hovernet(keys=["pred", "label"], nested_key=HoVerNetBranch.NP.value),
)
},
val_handlers=val_handlers,
amp=cfg["amp"],
)
Expand All @@ -125,18 +127,22 @@ def main():
parser.add_argument(
"--root",
type=str,
default="/workspace/Data/CoNSeP/Prepared/consep",
default="/workspace/Data/Pathology/CoNSeP/Prepared",
help="root data dir",
)

parser.add_argument(
"--ckpt",
type=str,
default="./logs/model.pt",
help="Path to the pytorch checkpoint",
)
parser.add_argument("--bs", type=int, default=16, dest="batch_size", help="batch size")
parser.add_argument("--no-amp", action="store_false", dest="amp", help="deactivate amp")
parser.add_argument("--classes", type=int, default=5, dest="out_classes", help="output classes")
parser.add_argument("--mode", type=str, default="original", help="choose either `original` or `fast`")

parser.add_argument("--cpu", type=int, default=8, dest="num_workers", help="number of workers")
parser.add_argument("--use_gpu", type=bool, default=True, dest="use_gpu", help="whether to use gpu")
parser.add_argument("--ckpt", type=str, dest="ckpt_path", help="checkpoint path")

args = parser.parse_args()
cfg = vars(args)