This tutorial shows several visualization approaches for 3D image during transformation.

#### [Auto3DSeg](./auto3dseg/)

This folder shows how to run the comprehensive Auto3DSeg pipeline with minimal inputs and customize the Auto3DSeg modules to meet different user requirements.
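
As a minimal, hedged sketch (the modality and dataset paths below are hypothetical placeholders, not taken from this README), the full pipeline can be launched through MONAI's `AutoRunner`:

```python
from monai.apps.auto3dseg import AutoRunner

# Minimal Auto3DSeg run: the task is described by a small input config.
# All paths here are hypothetical placeholders.
runner = AutoRunner(
    input={
        "modality": "CT",                    # imaging modality of the task
        "datalist": "./task_datalist.json",  # hypothetical datalist JSON
        "dataroot": "./data",                # hypothetical image root folder
    }
)
runner.run()  # data analysis, algorithm generation, training, ensembling
```

Individual stages can also be switched on or off via `AutoRunner`'s constructor arguments, which is one way the modules can be customized to different user requirements.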

This tutorial shows how to construct a training workflow of self-supervised learning where unlabeled data is utilized. It shows how to train a model on the TCIA dataset of unlabeled COVID-19 cases.

This tutorial shows how to utilize pre-trained weights from the self-supervised learning framework. It shows how to train a model for multi-class 3D segmentation using the pretrained weights.

from MONAI. The original ViT was modified by the attachment of two 3D Convolutional Transpose Layers to achieve a reconstruction size similar to that of the input image. The ViT is the backbone for the UNETR [2] network architecture, which was used for fine-tuning on the fully supervised tasks.
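
MONAI ships this ViT-plus-transposed-convolution design as `ViTAutoEnc`; a minimal sketch of it on 96x96x96 patches (the hyperparameters here are illustrative, not prescribed by this README):

```python
import torch
from monai.networks.nets import ViTAutoEnc

# ViT encoder whose transposed-convolution layers upsample the token
# features back to the input resolution for reconstruction.
model = ViTAutoEnc(
    in_channels=1,
    img_size=(96, 96, 96),
    patch_size=(16, 16, 16),
    hidden_size=768,
    mlp_dim=3072,
)

x = torch.rand(2, 1, 96, 96, 96)  # a batch of two cubic patches
recon, hidden_states = model(x)   # reconstruction plus intermediate features
print(recon.shape)                # expected: torch.Size([2, 1, 96, 96, 96])
```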

The pre-trained ViT backbone weights were loaded into UNETR, while the decoder head still relies on random initialization for adaptability to the new downstream task. This flexibility also allows the user to adapt the ViT backbone to their own custom-created network architectures.
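
A hedged sketch of that weight transfer, assuming a checkpoint file `pretrained_vit.pt` (hypothetical name) whose `state_dict` keys line up with UNETR's `vit` submodule:

```python
import torch
from monai.networks.nets import UNETR

# Downstream segmentation model; the decoder stays randomly initialized.
model = UNETR(
    in_channels=1,
    out_channels=14,        # number of segmentation classes; task-dependent
    img_size=(96, 96, 96),
)

# Copy only the ViT encoder weights from the self-supervised checkpoint.
ckpt = torch.load("pretrained_vit.pt", map_location="cpu")  # hypothetical file
pretrained = ckpt["state_dict"]                             # assumed layout
vit_dict = model.vit.state_dict()
vit_dict.update({k: v for k, v in pretrained.items() if k in vit_dict})
model.vit.load_state_dict(vit_dict)
```
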
References:

volume. Two augmented views of the same 3D patch are generated for the contrastive loss, which draws the two augmented views closer to each other if the views are generated from the same patch and otherwise tries to maximize the disagreement. The CL offers this functionality on a mini-batch.
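
For reference, MONAI exposes this loss as `monai.losses.ContrastiveLoss`; a small sketch with random stand-in features (the shapes and temperature are illustrative):

```python
import torch
from monai.losses import ContrastiveLoss

# Flattened feature vectors for the two augmented views of each patch
# in a mini-batch (random stand-ins for real network outputs).
emb_view1 = torch.randn(4, 256)
emb_view2 = torch.randn(4, 256)

contrastive = ContrastiveLoss(temperature=0.05)
loss = contrastive(emb_view1, emb_view2)  # low when matching pairs agree
```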

Multiple axial slices of a 96x96x96 patch are shown before the augmentation (see Original Patch in the above figure). Augmented Views 1 & 2 are different augmentations generated via the transforms on the same cubic patch. The objective of the SSL network is to reconstruct the original top-row image from the first view. The contrastive loss is driven by maximizing the agreement of the reconstruction based on the input of the two augmented views.

`matshow3d` from `monai.visualize` was used for creating this figure; a tutorial on using it can be found [here](https://github.com/Project-MONAI/tutorials/blob/main/modules/transform_visualization.ipynb).
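
A minimal sketch of producing such a figure with `matshow3d` (the random volume stands in for an actual patch):

```python
import torch
import matplotlib.pyplot as plt
from monai.visualize import matshow3d

volume = torch.rand(96, 96, 96)  # stand-in for a 96x96x96 cubic patch
fig = plt.figure()
# Show every 12th axial slice of the volume as a grid of 2D frames.
matshow3d(volume, fig=fig, every_n=12, frame_dim=-1, cmap="gray")
plt.show()
```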