
Commit 09d7606

Update description in fast_training_tutorial (Project-MONAI#1509)
Fixes Project-MONAI#1404.

### Description

Update description in `fast_training_tutorial`

### Checks

<!--- Put an `x` in all the boxes that apply, and remove the not applicable items -->
- [ ] Avoid including large-size files in the PR.
- [ ] Clean up long text outputs from code cells in the notebook.
- [ ] For security purposes, please check the contents and remove any sensitive info such as user names and private key.
- [ ] Ensure (1) hyperlinks and markdown anchors are working (2) use relative paths for tutorial repo files (3) put figure and graphs in the `./figure` folder
- [ ] Notebook runs automatically `./runner.sh -t <path to .ipynb file>`

Signed-off-by: KumoLiu <[email protected]>
1 parent d9dd94e commit 09d7606

File tree: 2 files changed (+2, −2 lines)


acceleration/fast_model_training_guide.md

Lines changed: 1 addition & 1 deletion
@@ -281,7 +281,7 @@ train_transforms = [
 ]
 dataset = CacheDataset(..., transform=train_trans)
 ```
-Here we convert to PyTorch `Tensor` with `EnsureTyped` transform and move data to GPU with `ToDeviced` transform. `CacheDataset` caches the transform results until `ToDeviced`, so it is in GPU memory. Then in every epoch, the program fetches cached data from GPU memory and only execute the random transform `RandCropByPosNegLabeld` on GPU directly.
+Here we convert to PyTorch `Tensor` and move data to GPU with `EnsureTyped` transform. `CacheDataset` caches the transform results until `EnsureTyped`, so it is in GPU memory. Then in every epoch, the program fetches cached data from GPU memory and only execute the random transform `RandCropByPosNegLabeld` on GPU directly.
 GPU caching example is available at [Spleen fast training tutorial](fast_training_tutorial.ipynb).

 ## Leveraging multi-GPU distributed training
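For context, here is a minimal sketch of the GPU-caching pattern the updated paragraph describes. It assumes a CUDA device, and the file paths and intensity-scaling values are illustrative placeholders rather than the tutorial's exact configuration: deterministic transforms run once, `EnsureTyped` converts to `Tensor` and moves the result to GPU so `CacheDataset` keeps it in GPU memory, and only the random crop is re-executed each epoch, directly on GPU.

```python
import torch
from monai.data import CacheDataset
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    EnsureTyped,
    LoadImaged,
    RandCropByPosNegLabeld,
    ScaleIntensityRanged,
)

device = torch.device("cuda:0")
# hypothetical data list; replace with real image/label paths
train_files = [{"image": "img_0.nii.gz", "label": "seg_0.nii.gz"}]

train_trans = Compose([
    # deterministic transforms: executed once, results cached by CacheDataset
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    ScaleIntensityRanged(keys="image", a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
    # convert to Tensor and move to GPU; everything cached up to this point lives in GPU memory
    EnsureTyped(keys=["image", "label"], device=device, track_meta=False),
    # random transform: re-executed every epoch, directly on the GPU-resident cache
    RandCropByPosNegLabeld(
        keys=["image", "label"], label_key="label",
        spatial_size=(96, 96, 96), pos=1, neg=1, num_samples=4,
    ),
])

dataset = CacheDataset(data=train_files, transform=train_trans, cache_rate=1.0, num_workers=4)
```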

acceleration/fast_training_tutorial.ipynb

Lines changed: 1 addition & 1 deletion
@@ -385,7 +385,7 @@
     "For MONAI fast training progress, we mainly introduce the following features:\n",
     "1. `AMP` (auto mixed precision): AMP is an important feature released in PyTorch v1.6, NVIDIA CUDA 11 added strong support for AMP and significantly improved training speed.\n",
     "2. `CacheDataset`: Dataset with the cache mechanism that can load data and cache deterministic transforms' result during training.\n",
-    "3. `ToDeviced` transform: to move data to GPU and cache with `CacheDataset`, then execute random transforms on GPU directly, avoid CPU -> GPU sync in every epoch. Please note that not all the MONAI transforms support GPU operation so far, still working in progress.\n",
+    "3. `EnsureTyped` transform: to move data to GPU and cache with `CacheDataset`, then execute random transforms on GPU directly, avoid CPU -> GPU sync in every epoch. Please note that not all the MONAI transforms support GPU operation so far, still working in progress.\n",
     "4. `set_track_meta(False)`: to disable meta tracking in the random transforms to avoid unnecessary computation.\n",
     "5. `ThreadDataLoader`: uses multi-threads instead of multi-processing, faster than `DataLoader` in light-weight task as we already cached the results of most computation.\n",
     "6. `DiceCE` loss function: computes Dice loss and Cross Entropy Loss, returns the weighted sum of these two losses.\n",
