
Commit dab8fb2

update repo
Signed-off-by: dongyang0122 <[email protected]>
1 parent e56c416 commit dab8fb2

Showing 3 changed files with 15 additions and 6 deletions.


acceleration/automatic_mixed_precision.ipynb

Lines changed: 5 additions & 2 deletions
@@ -25,14 +25,17 @@
 "This tutorial shows how to apply the automatic mixed precision (AMP) feature of PyTorch to training and validation programs. \n",
 "It's modified from the Spleen 3D segmentation tutorial notebook, and compares the training speed and memory usage with/without AMP.\n",
 "\n",
-"The Spleen dataset can be downloaded from http://medicaldecathlon.com/.\n"
+"The Spleen dataset can be downloaded from http://medicaldecathlon.com/.\n",
+"\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/acceleration/automatic_mixed_precision.ipynb)"
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Check environment"
+"## Setup environment"
 ]
 },
 {
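For context, the AMP pattern this notebook benchmarks reduces to wrapping the forward pass in `torch.cuda.amp.autocast` and scaling the loss with a `GradScaler`. A minimal sketch, assuming a placeholder model, optimizer, loss, and data (none of these names come from the notebook):

```python
import torch

# Placeholder model/optimizer/loss; the notebook trains a 3D UNet instead.
model = torch.nn.Linear(16, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 gradient underflow

# One illustrative batch; a real loop iterates over a DataLoader.
inputs = torch.randn(4, 16).cuda()
labels = torch.randint(0, 2, (4,)).cuda()

optimizer.zero_grad()
with torch.cuda.amp.autocast():   # runs eligible ops in float16
    loss = loss_fn(model(inputs), labels)
scaler.scale(loss).backward()     # backward on the scaled loss
scaler.step(optimizer)            # unscales gradients, then steps
scaler.update()                   # adjusts the scale factor for the next step
```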

acceleration/dataset_type_performance.ipynb

Lines changed: 5 additions & 2 deletions
@@ -26,14 +26,17 @@
 "\n",
 "`PersistentDataset` processes original data sources through the non-random transforms on first use, and stores these intermediate tensor values to an on-disk persistence representation. The intermediate processed tensors are loaded from disk on each use for processing by the random-transforms for each analysis request. The `PersistentDataset` has a similar memory footprint to the simple `Dataset`, with performance characteristics close to the `CacheDataset` at the expense of disk storage. Additionally, the cost of first time processing of data is distributed across each first use.\n",
 "\n",
-"It's modified from the [Spleen 3D segmentation tutorial notebook](../3d_segmentation/spleen_segmentation_3d.ipynb).\n"
+"It's modified from the [Spleen 3D segmentation tutorial notebook](../3d_segmentation/spleen_segmentation_3d.ipynb).\n",
+"\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/acceleration/dataset_type_performance.ipynb)"
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Check environment"
+"## Setup environment"
 ]
 },
 {
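The tradeoff described above shows up directly in how the three dataset types are constructed. A minimal sketch, assuming an illustrative file list and transform chain rather than the notebook's actual Spleen pipeline:

```python
from monai.data import Dataset, CacheDataset, PersistentDataset
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, RandFlipd

# Illustrative inputs; the notebook builds these from the Spleen dataset.
data = [{"image": "img0.nii.gz"}, {"image": "img1.nii.gz"}]
xform = Compose([
    LoadImaged(keys="image"),           # deterministic: cacheable
    EnsureChannelFirstd(keys="image"),  # deterministic: cacheable
    RandFlipd(keys="image", prob=0.5),  # random: re-run on every access
])

plain = Dataset(data=data, transform=xform)        # recomputes everything each epoch
cached = CacheDataset(data=data, transform=xform)  # caches deterministic results in RAM
persist = PersistentDataset(data=data, transform=xform,
                            cache_dir="./cache")   # caches deterministic results on disk
```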

acceleration/fast_training_tutorial.ipynb

Lines changed: 5 additions & 2 deletions
@@ -34,14 +34,17 @@
 "\n",
 "With an A100 GPU and a target validation `mean dice = 0.94` on the `foreground` channel only, it achieves more than a `150x` speedup over the regular PyTorch implementation when reaching the same metric, and every epoch is more than `50x` faster than regular training.\n",
 "\n",
-"It's modified from the Spleen 3D segmentation tutorial notebook; the Spleen dataset can be downloaded from http://medicaldecathlon.com/.\n"
+"It's modified from the Spleen 3D segmentation tutorial notebook; the Spleen dataset can be downloaded from http://medicaldecathlon.com/.\n",
+"\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/acceleration/fast_training_tutorial.ipynb) (* please note that the free GPU in Colab may not be as powerful as the A100 used for the results in this notebook: it may not support AMP, and GPU computation of transforms may not be faster than CPU computation.)"
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Setup and check environment"
+"## Setup environment"
 ]
 },
 {
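One ingredient behind those speedups is serving fully cached data without multiprocessing overhead. A sketch of MONAI's `ThreadDataLoader` over a `CacheDataset`, again assuming an illustrative file list rather than the notebook's actual pipeline:

```python
from monai.data import CacheDataset, ThreadDataLoader
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd

# Illustrative file list; the notebook uses the Spleen dataset instead.
data = [{"image": "img0.nii.gz"}, {"image": "img1.nii.gz"}]
xform = Compose([LoadImaged(keys="image"), EnsureChannelFirstd(keys="image")])

# With everything cached in RAM (cache_rate=1.0), thread-based loading
# (num_workers=0) avoids copying batches between worker processes.
train_ds = CacheDataset(data=data, transform=xform, cache_rate=1.0)
train_loader = ThreadDataLoader(train_ds, batch_size=2, num_workers=0, shuffle=True)

for batch in train_loader:
    images = batch["image"]  # already transformed; feed into an AMP loop as above
```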

0 commit comments

Comments
 (0)