* [DLMED] add fast_training tutorial
Signed-off-by: Nic Ma <[email protected]>
* [DLMED] update README
* [DLMED] complete fast training tutorial
Signed-off-by: Nic Ma <[email protected]>
* [DLMED] update according to comments
Signed-off-by: Nic Ma <[email protected]>
README.md (5 additions & 0 deletions)
@@ -28,6 +28,9 @@ And compares the training speed and memory usage with/without AMP.
This tutorial shows how to construct a training workflow for a multi-label segmentation task based on the [MSD Brain Tumor dataset](http://medicaldecathlon.com).
This notebook compares the performance of `Dataset`, `CacheDataset` and `PersistentDataset`. These classes differ in where the data is stored (in memory or on disk) and when the transforms are applied.
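As a hedged illustration of that difference, the sketch below (not taken from the notebook; the file names and transform chain are placeholders) shows how the three dataset types are typically constructed in MONAI:

```python
from monai.data import Dataset, CacheDataset, PersistentDataset
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, ScaleIntensityd

# Hypothetical data dictionaries; in the notebook these come from the real dataset files.
data_dicts = [{"image": f"img_{i}.nii.gz", "label": f"seg_{i}.nii.gz"} for i in range(10)]

transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    ScaleIntensityd(keys="image"),
])

# Dataset: transforms are re-applied on every __getitem__ call (no caching).
regular_ds = Dataset(data=data_dicts, transform=transforms)

# CacheDataset: deterministic transform results are cached in RAM before training starts.
cache_ds = CacheDataset(data=data_dicts, transform=transforms, cache_rate=1.0)

# PersistentDataset: intermediate results are cached on disk and reused across runs.
persistent_ds = PersistentDataset(data=data_dicts, transform=transforms, cache_dir="./cache")
```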
This tutorial compares the training performance of a pure PyTorch program with a MONAI-optimized program, running on an NVIDIA GPU with the latest CUDA library.
The optimization methods mainly include `AMP`, `CacheDataset` and `Novograd`.
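As a rough sketch of how those pieces fit together (assuming a standard PyTorch training loop; the network, loss and hyperparameters below are illustrative, not the tutorial's exact configuration):

```python
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet
from monai.optimizers import Novograd

device = torch.device("cuda")
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64), strides=(2, 2)).to(device)
loss_fn = DiceLoss(sigmoid=True)
optimizer = Novograd(model.parameters(), lr=1e-3)  # Novograd optimizer from MONAI
scaler = torch.cuda.amp.GradScaler()               # AMP loss scaling

def train_step(batch):
    # `batch` is assumed to come from a DataLoader built on a CacheDataset
    # (see the dataset sketch above), so transforms are not recomputed per epoch.
    images = batch["image"].to(device)
    labels = batch["label"].to(device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():        # mixed-precision forward pass
        outputs = model(images)
        loss = loss_fn(outputs, labels)
    scaler.scale(loss).backward()          # scaled backward pass
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```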