
Commit 9ac03b2

Merge branch 'MetaTensor_Spacing' into MetaTensor

2 parents 0dc98b4 + e1967d6

24 files changed: +142 −151 lines

.github/workflows/pep8.yml

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ on:
   # quick tests for every pull request
   push:
     branches:
-      - master
+      - main
   pull_request:
 
 jobs:

3d_segmentation/brats_segmentation_3d.ipynb

Lines changed: 5 additions & 5 deletions

@@ -267,13 +267,13 @@
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
 " EnsureChannelFirstd(keys=\"image\"),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
-" ConvertToMultiChannelBasedOnBratsClassesd(keys=\"label\"),\n",
 " Spacingd(\n",
 " keys=[\"image\", \"label\"],\n",
 " pixdim=(1.0, 1.0, 1.0),\n",
 " mode=(\"bilinear\", \"nearest\"),\n",
 " ),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
+" ConvertToMultiChannelBasedOnBratsClassesd(keys=\"label\"),\n",
 " RandSpatialCropd(keys=[\"image\", \"label\"], roi_size=[224, 224, 144], random_size=False),\n",
 " RandFlipd(keys=[\"image\", \"label\"], prob=0.5, spatial_axis=0),\n",
 " RandFlipd(keys=[\"image\", \"label\"], prob=0.5, spatial_axis=1),\n",
@@ -289,13 +289,13 @@
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
 " EnsureChannelFirstd(keys=\"image\"),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
-" ConvertToMultiChannelBasedOnBratsClassesd(keys=\"label\"),\n",
 " Spacingd(\n",
 " keys=[\"image\", \"label\"],\n",
 " pixdim=(1.0, 1.0, 1.0),\n",
 " mode=(\"bilinear\", \"nearest\"),\n",
 " ),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
+" ConvertToMultiChannelBasedOnBratsClassesd(keys=\"label\"),\n",
 " NormalizeIntensityd(keys=\"image\", nonzero=True, channel_wise=True),\n",
 " EnsureTyped(keys=[\"image\", \"label\"]),\n",
 " ]\n",
@@ -788,9 +788,9 @@
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
 " EnsureChannelFirstd(keys=[\"image\"]),\n",
 " Orientationd(keys=[\"image\"], axcodes=\"RAS\"),\n",
+" Spacingd(keys=[\"image\"], pixdim=(1.0, 1.0, 1.0), mode=\"bilinear\"),\n",
 " FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ConvertToMultiChannelBasedOnBratsClassesd(keys=\"label\"),\n",
-" Spacingd(keys=[\"image\"], pixdim=(1.0, 1.0, 1.0), mode=\"bilinear\"),\n",
 " NormalizeIntensityd(keys=\"image\", nonzero=True, channel_wise=True),\n",
 " EnsureTyped(keys=[\"image\", \"label\"]),\n",
 " ]\n",
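The pattern repeated across these notebook diffs is to run `Spacingd` while the data is still a `MetaTensor`, and only then apply `FromMetaTensord`: resampling to a new `pixdim` needs the spatial metadata that `FromMetaTensord` splits back out of the tensor. A minimal, MONAI-free sketch of why the order matters (`Image`, `spacingd`, and `from_meta_tensord` are hypothetical stand-ins, not MONAI's API):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Image:
    """Stand-in for a MetaTensor: voxel data shape plus spacing metadata."""
    shape: Tuple[int, ...]                  # voxel grid size
    spacing: Optional[Tuple[float, ...]]    # mm per voxel; None once metadata is stripped

def spacingd(img: Image, pixdim: Tuple[float, ...]) -> Image:
    """Resample to a target spacing; requires the spacing metadata."""
    if img.spacing is None:
        raise ValueError("spacingd needs spacing metadata; run it before from_meta_tensord")
    new_shape = tuple(
        round(n * old / new) for n, old, new in zip(img.shape, img.spacing, pixdim)
    )
    return Image(shape=new_shape, spacing=pixdim)

def from_meta_tensord(img: Image) -> Image:
    """Strip the metadata, mimicking FromMetaTensord's plain-tensor output."""
    return Image(shape=img.shape, spacing=None)

img = Image(shape=(120, 120, 77), spacing=(2.0, 2.0, 2.0))

# Corrected order (this commit): resample first, then strip metadata.
out = from_meta_tensord(spacingd(img, (1.0, 1.0, 1.0)))
print(out.shape)  # (240, 240, 154)

# Old order fails: the metadata is gone before the resampling step runs.
try:
    spacingd(from_meta_tensord(img), (1.0, 1.0, 1.0))
except ValueError as e:
    print(e)
```

With the old ordering, the resampling step receives a plain tensor whose spacing information has already been detached, so it cannot compute the new grid.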

3d_segmentation/challenge_baseline/README.md

Lines changed: 2 additions & 2 deletions

@@ -109,9 +109,9 @@ This baseline method achieves 0.6904 ± 0.1801 Dice score on the challenge valid
 
 - For MONAI technical documentation, please visit [docs.monai.io](https://docs.monai.io/).
 - Please visit [`Project-MONAI/tutorials`](https://github.com/Project-MONAI/tutorials) for more examples, including:
-  - [`3D segmentation pipelines`](https://github.com/Project-MONAI/tutorials/tree/master/3d_segmentation),
+  - [`3D segmentation pipelines`](https://github.com/Project-MONAI/tutorials/tree/main/3d_segmentation),
   - [`Dynamic UNet`](https://github.com/Project-MONAI/tutorials/blob/main/modules/dynunet_tutorial.ipynb),
-  - [`Training acceleration`](https://github.com/Project-MONAI/tutorials/tree/master/acceleration).
+  - [`Training acceleration`](https://github.com/Project-MONAI/tutorials/tree/main/acceleration).
 
 ## Submitting to the leaderboard

3d_segmentation/spleen_segmentation_3d.ipynb

Lines changed: 4 additions & 4 deletions

@@ -282,9 +282,9 @@
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
 " EnsureChannelFirstd(keys=[\"image\", \"label\"]),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " Spacingd(keys=[\"image\", \"label\"], pixdim=(\n",
 " 1.5, 1.5, 2.0), mode=(\"bilinear\", \"nearest\")),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"], a_min=-57, a_max=164,\n",
 " b_min=0.0, b_max=1.0, clip=True,\n",
@@ -315,9 +315,9 @@
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
 " EnsureChannelFirstd(keys=[\"image\", \"label\"]),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " Spacingd(keys=[\"image\", \"label\"], pixdim=(\n",
 " 1.5, 1.5, 2.0), mode=(\"bilinear\", \"nearest\")),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"], a_min=-57, a_max=164,\n",
 " b_min=0.0, b_max=1.0, clip=True,\n",
@@ -697,9 +697,9 @@
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
 " EnsureChannelFirstd(keys=[\"image\", \"label\"]),\n",
 " Orientationd(keys=[\"image\"], axcodes=\"RAS\"),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " Spacingd(keys=[\"image\"], pixdim=(\n",
 " 1.5, 1.5, 2.0), mode=\"bilinear\"),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"], a_min=-57, a_max=164,\n",
 " b_min=0.0, b_max=1.0, clip=True,\n",
@@ -792,9 +792,9 @@
 " LoadImaged(keys=\"image\"),\n",
 " EnsureChannelFirstd(keys=\"image\"),\n",
 " Orientationd(keys=[\"image\"], axcodes=\"RAS\"),\n",
-" FromMetaTensord(keys=\"image\"),\n",
 " Spacingd(keys=[\"image\"], pixdim=(\n",
 " 1.5, 1.5, 2.0), mode=\"bilinear\"),\n",
+" FromMetaTensord(keys=\"image\"),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"], a_min=-57, a_max=164,\n",
 " b_min=0.0, b_max=1.0, clip=True,\n",
3d_segmentation/spleen_segmentation_3d_lightning.ipynb

Lines changed: 2 additions & 2 deletions

@@ -272,12 +272,12 @@
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
 " AddChanneld(keys=[\"image\", \"label\"]),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " Spacingd(\n",
 " keys=[\"image\", \"label\"],\n",
 " pixdim=(1.5, 1.5, 2.0),\n",
 " mode=(\"bilinear\", \"nearest\"),\n",
 " ),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"], a_min=-57, a_max=164,\n",
 " b_min=0.0, b_max=1.0, clip=True,\n",
@@ -313,12 +313,12 @@
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
 " AddChanneld(keys=[\"image\", \"label\"]),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " Spacingd(\n",
 " keys=[\"image\", \"label\"],\n",
 " pixdim=(1.5, 1.5, 2.0),\n",
 " mode=(\"bilinear\", \"nearest\"),\n",
 " ),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"], a_min=-57, a_max=164,\n",
 " b_min=0.0, b_max=1.0, clip=True,\n",

README.md

Lines changed: 1 addition & 1 deletion

@@ -175,7 +175,7 @@ And compares the training speed and memory usage with/without AMP.
 This notebook compares the performance of `Dataset`, `CacheDataset` and `PersistentDataset`. These classes differ in how data is stored (in memory or on disk), and at which moment transforms are applied.
 #### [fast_training_tutorial](./acceleration/fast_training_tutorial.ipynb)
 This tutorial compares the training performance of pure PyTorch program and optimized program in MONAI based on NVIDIA GPU device and latest CUDA library.
-The optimization methods mainly include: `AMP`, `CacheDataset` and `Novograd`.
+The optimization methods mainly include: `AMP`, `CacheDataset`, `GPU transforms`, `ThreadDataLoader`, `DiceCELoss` and `SGD`.
 #### [multi_gpu_test](./acceleration/multi_gpu_test.ipynb)
 This notebook is a quick demo for devices, run the Ignite trainer engine on CPU, GPU and multiple GPUs.
 #### [threadbuffer_performance](./acceleration/threadbuffer_performance.ipynb)

acceleration/automatic_mixed_precision.ipynb

Lines changed: 2 additions & 2 deletions

@@ -239,14 +239,14 @@
 " train_transforms = Compose(\n",
 " [\n",
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " AddChanneld(keys=[\"image\", \"label\"]),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
 " Spacingd(\n",
 " keys=[\"image\", \"label\"],\n",
 " pixdim=(1.5, 1.5, 2.0),\n",
 " mode=(\"bilinear\", \"nearest\"),\n",
 " ),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"],\n",
 " a_min=-57,\n",
@@ -283,14 +283,14 @@
 " val_transforms = Compose(\n",
 " [\n",
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " AddChanneld(keys=[\"image\", \"label\"]),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
 " Spacingd(\n",
 " keys=[\"image\", \"label\"],\n",
 " pixdim=(1.5, 1.5, 2.0),\n",
 " mode=(\"bilinear\", \"nearest\"),\n",
 " ),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"],\n",
 " a_min=-57,\n",

acceleration/dataset_type_performance.ipynb

Lines changed: 2 additions & 2 deletions

@@ -398,14 +398,14 @@
 " train_transforms = Compose(\n",
 " [\n",
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " AddChanneld(keys=[\"image\", \"label\"]),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
 " Spacingd(\n",
 " keys=[\"image\", \"label\"],\n",
 " pixdim=(1.5, 1.5, 2.0),\n",
 " mode=(\"bilinear\", \"nearest\"),\n",
 " ),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"],\n",
 " a_min=-57,\n",
@@ -438,14 +438,14 @@
 " val_transforms = Compose(\n",
 " [\n",
 " LoadImaged(keys=[\"image\", \"label\"]),\n",
-" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " AddChanneld(keys=[\"image\", \"label\"]),\n",
 " Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
 " Spacingd(\n",
 " keys=[\"image\", \"label\"],\n",
 " pixdim=(1.5, 1.5, 2.0),\n",
 " mode=(\"bilinear\", \"nearest\"),\n",
 " ),\n",
+" FromMetaTensord(keys=[\"image\", \"label\"]),\n",
 " ScaleIntensityRanged(\n",
 " keys=[\"image\"],\n",
 " a_min=-57,\n",
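The `dataset_type_performance` notebook compares datasets by when transforms run; the speedup of `CacheDataset` comes from executing the deterministic chain (load, orient, resample) exactly once and re-running only the randomized transforms each epoch. A rough stdlib-only sketch of that idea (class and function names here are illustrative, not MONAI's API):

```python
import random

class CacheDatasetSketch:
    """Illustrative only: cache the deterministic transform results once,
    then apply only the randomized transforms on each access. This mirrors
    the core idea behind MONAI's CacheDataset, not its implementation."""

    def __init__(self, items, deterministic, randomized):
        self.randomized = randomized
        # Pay the deterministic cost (load / orientation / spacing) once.
        self.cache = [deterministic(x) for x in items]

    def __getitem__(self, i):
        x = self.cache[i]           # cheap: already transformed
        for t in self.randomized:   # light-weight, re-run every epoch
            x = t(x)
        return x

calls = {"det": 0}

def expensive_deterministic(x):
    calls["det"] += 1
    return x * 10   # stands in for load + orientation + spacing

ds = CacheDatasetSketch(
    items=[1, 2, 3],
    deterministic=expensive_deterministic,
    randomized=[lambda x: x + random.choice([0, 1])],
)

for _epoch in range(5):              # five epochs of iteration...
    _ = [ds[i] for i in range(3)]
print(calls["det"])                  # 3: deterministic work ran once per item, not once per epoch
```

This is also why the commit keeps `Spacingd` in the deterministic prefix of the pipeline: anything above the first randomized transform is eligible for caching.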

acceleration/fast_model_training_guide.md

Lines changed: 2 additions & 3 deletions

@@ -305,14 +305,13 @@ With all the above strategies, in this section, we introduce how to apply them t
 ### 1. Spleen segmentation
 
 - Select the algorithms based on the experiments.
-  1. As a binary segmentation task, we replaced the baseline `Dice` loss with a `DiceCE` loss, it can help improve the convergence. To achieve the target metric (mean Dice = 0.95) it reduces the number of training epochs from 200 to 50.
-  2. We tried several numerical optimizers, and finally replaced the baseline `Adam` optimizer with `Novograd`, which consistently reduce the number of training epochs from 50 to 30.
+  As a binary segmentation task, we replaced the baseline `Dice` loss with a `DiceCE` loss, which helps improve convergence. We analyzed the training curve, tuned different parameters of the network, and tested several numerical optimizers, finally replacing the baseline `Adam` optimizer with `SGD`. To achieve the target metric (`mean Dice = 0.94` on the `foreground` channel only), this reduces the number of training epochs from 280 to 60.
 - Optimize GPU utilization.
   1. With `AMP`, the training speed is significantly improved and can achieve almost the same validation metric as without `AMP`.
   2. The deterministic transform results for the whole spleen dataset are around 8 GB, which can be cached in a V100 GPU's memory. So we cached all the data in GPU memory and executed the subsequent transforms directly on GPU.
 - Replace `DataLoader` with `ThreadDataLoader`. As all the data are cached in GPU memory and the computation of randomized transforms is on GPU and light-weight, `ThreadDataLoader` helps avoid the IPC cost of multi-processing in `DataLoader` and increases GPU utilization.
 
-In summary, with a V100 GPU, we can achieve the training converges at a target validation mean Dice of `0.95` within one minute (`52s` on a V100 GPU, `41s` on an A100 GPU), it is approximately `200x` faster compared with the native PyTorch implementation when achieving the target metric. And each epoch is `20x` faster than the regular training.
+In summary, with a V100 GPU and a target validation `mean Dice = 0.94` on the `foreground` channel only, training is more than `100x` faster than the regular PyTorch implementation when achieving the same validation metric, and every epoch is `20x` faster than regular training.
 ![spleen fast training](../figures/fast_training.png)
 
 More details are available at [Spleen fast training tutorial](https://github.com/Project-MONAI/tutorials/blob/main/acceleration/fast_training_tutorial.ipynb).
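On the guide's `ThreadDataLoader` point: a single background thread can fill a small in-process buffer so batch preparation overlaps with the training step, with no serialization/IPC cost from worker processes. A hedged stdlib sketch of the pattern (`thread_data_loader` is a made-up name, not MONAI's implementation):

```python
import queue
import threading

def thread_data_loader(dataset, buffer_size=2):
    """Yield items from `dataset`, prefetched by one background thread.

    The worker thread fills a bounded in-process queue while the consumer
    (the training loop) drains it, so loading overlaps with computation and
    nothing is pickled across process boundaries."""
    q = queue.Queue(maxsize=buffer_size)
    _end = object()  # sentinel marking end of the dataset

    def worker():
        for item in dataset:
            q.put(item)   # blocks when the buffer is full (backpressure)
        q.put(_end)

    threading.Thread(target=worker, daemon=True).start()
    while (item := q.get()) is not _end:
        yield item

batches = [f"batch-{i}" for i in range(4)]
print(list(thread_data_loader(batches)))  # ['batch-0', 'batch-1', 'batch-2', 'batch-3']
```

The thread-based approach only pays off when per-item work is light (e.g. data already cached on GPU, as in the guide), since Python threads share the GIL for CPU-bound transforms.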
