# Maisi readme #1743 (Merged)
## Commits (63)

- `c97509f` add readme (Can-Zhao)
- `544ee06` add readme (Can-Zhao)
- `033e614` add readme (Can-Zhao)
- `fce7a02` Merge branch 'main' into maisi_readme (Can-Zhao)
- `d315e0a` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- `1af6f91` correct typo (Can-Zhao)
- `0f2a0e1` Merge branch 'maisi_readme' of https://github.com/Can-Zhao/tutorials … (Can-Zhao)
- `8959b8b` add mri training data number (Can-Zhao)
- `cdc6ae7` add more details for inference (Can-Zhao)
- `aeb301e` Merge branch 'main' into maisi_readme (guopengf)
- `31ffdf5` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- `58f637f` test commit (guopengf)
- `9648aef` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- `03f905a` Merge branch 'main' into maisi_readme (Can-Zhao)
- `87305e6` Merge branch 'main' into maisi_readme (Can-Zhao)
- `3fea7e3` Merge branch 'main' into maisi_readme (guopengf)
- `9e0c8df` update controlnet readme (guopengf)
- `3f18e85` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- `bd030b7` Merge branch 'main' into maisi_readme (mingxin-zheng)
- `6aee4ff` Merge branch 'main' into maisi_readme (Can-Zhao)
- `3d94f1a` Merge branch 'Project-MONAI:main' into maisi_readme (Can-Zhao)
- `7ebfba7` Merge branch 'Project-MONAI:main' into maisi_readme (Can-Zhao)
- `7ea4767` Update readme for highlight, infer, and vae (Can-Zhao)
- `746cfa8` Update readme for highlight, infer, and vae (Can-Zhao)
- `914022c` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- `b767492` Update readme for highlight, infer, and vae (Can-Zhao)
- `73f2edc` Merge branch 'maisi_readme' of https://github.com/Can-Zhao/tutorials … (Can-Zhao)
- `54bf9d8` update controlnet readme (guopengf)
- `9aadd0f` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- `cba35c0` update controlnet readme (guopengf)
- `279f8ea` update vae readme, update vae botebook (Can-Zhao)
- `fc6e869` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- `1b02466` Merge branch 'main' into maisi_readme (mingxin-zheng)
- `d1bdd4c` resolve conflict (Can-Zhao)
- `383e0dd` Merge branch 'maisi_readme' of https://github.com/Can-Zhao/tutorials … (Can-Zhao)
- `3c41b60` add description about ARM64 (Can-Zhao)
- `8d56497` add description about ARM64 (Can-Zhao)
- `880945d` add description about VAE data (Can-Zhao)
- `1cd46e8` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- `aa7b524` Merge branch 'main' into maisi_readme (guopengf)
- `23090c2` update readme (guopengf)
- `ae3875b` Merge branch 'main' into maisi_readme (guopengf)
- `8d3e4f6` add detail info on vae data (Can-Zhao)
- `4deda70` add detail info on vae data (Can-Zhao)
- `5de4aa9` typo (Can-Zhao)
- `a1aaba5` typo (Can-Zhao)
- `994008b` update controlnet part (guopengf)
- `028fe7f` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- `37a48a5` Update generative/maisi/data/README.md (guopengf)
- `09c741e` Merge branch 'main' into maisi_readme (KumoLiu)
- `43c8542` update (dongyang0122)
- `861ab9f` update (dongyang0122)
- `452210e` update (dongyang0122)
- `4cdc0e1` update (dongyang0122)
- `4a36743` update (dongyang0122)
- `4f89f54` update (dongyang0122)
- `c0a4355` update (dongyang0122)
- `ce271d5` update (dongyang0122)
- `6a36781` update (dongyang0122)
- `a655980` update (guopengf)
- `bb3d46a` update (guopengf)
- `97738a7` update license (guopengf)
- `7274da7` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
### `LICENSE.weights` (new file, +35 lines)

NVIDIA License
1. Definitions

“Licensor” means any person or entity that distributes its Work.

“Work” means (a) the original work of authorship made available under this license, which may include software, documentation, or other files, and (b) any additions to or derivative works thereof that are made available under this license.

The terms “reproduce,” “reproduction,” “derivative works,” and “distribution” have the meaning as provided under U.S. copyright law; provided, however, that for the purposes of this license, derivative works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work.

Works are “made available” under this license by including in or with the Work either (a) a copyright notice referencing the applicability of this license to the Work, or (b) a copy of this license.

2. License Grant

2.1 Copyright Grant. Subject to the terms and conditions of this license, each Licensor grants to you a perpetual, worldwide, non-exclusive, royalty-free, copyright license to use, reproduce, prepare derivative works of, publicly display, publicly perform, sublicense and distribute its Work and any resulting derivative works in any form.

3. Limitations

3.1 Redistribution. You may reproduce or distribute the Work only if (a) you do so under this license, (b) you include a complete copy of this license with your distribution, and (c) you retain without modification any copyright, patent, trademark, or attribution notices that are present in the Work.

3.2 Derivative Works. You may specify that additional or different terms apply to the use, reproduction, and distribution of your derivative works of the Work (“Your Terms”) only if (a) Your Terms provide that the use limitation in Section 3.3 applies to your derivative works, and (b) you identify the specific derivative works that are subject to Your Terms. Notwithstanding Your Terms, this license (including the redistribution requirements in Section 3.1) will continue to apply to the Work itself.

3.3 Use Limitation. The Work and any derivative works thereof only may be used or intended for use non-commercially. Notwithstanding the foregoing, NVIDIA Corporation and its affiliates may use the Work and any derivative works commercially. As used herein, “non-commercially” means for research or evaluation purposes only.

3.4 Patent Claims. If you bring or threaten to bring a patent claim against any Licensor (including any claim, cross-claim or counterclaim in a lawsuit) to enforce any patents that you allege are infringed by any Work, then your rights under this license from such Licensor (including the grant in Section 2.1) will terminate immediately.

3.5 Trademarks. This license does not grant any rights to use any Licensor’s or its affiliates’ names, logos, or trademarks, except as necessary to reproduce the notices described in this license.

3.6 Termination. If you violate any term of this license, then your rights under this license (including the grant in Section 2.1) will terminate immediately.

4. Disclaimer of Warranty.

THE WORK IS PROVIDED “AS IS” WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER THIS LICENSE.

5. Limitation of Liability.

EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
### `README.md` (new file, +90 lines)

# Medical AI for Synthetic Imaging (MAISI)
This example demonstrates how to train and validate NVIDIA MAISI, a 3D Latent Diffusion Model (LDM) capable of generating large CT images accompanied by corresponding segmentation masks. It supports variable volume sizes and voxel spacings and allows precise control of organ/tumor size.

## MAISI Model Highlights
- A foundation Variational Autoencoder (VAE) model for latent feature compression that works for both CT and MRI with flexible volume size and voxel size
- A foundation diffusion model that can generate large CT volumes up to 512 × 512 × 768, with flexible volume size and voxel size
- A ControlNet that generates image/mask pairs with controllable organ/tumor size, which can improve downstream tasks

## Example Results and Evaluation
## MAISI Model Workflow
The training and inference workflows of MAISI are depicted in the figures below. Training begins with an autoencoder that encodes pixel-space images into latent features. A diffusion model is then trained in the latent space to denoise noisy latent features. During inference, latent features are first generated from random noise by applying multiple denoising steps with the trained diffusion model; finally, the denoised latent features are decoded into images by the trained autoencoder.
<p align="center">
  <img src="./figures/maisi_train.jpg" alt="MAISI training scheme">
  <br>
  <em>Figure 1: MAISI training scheme</em>
</p>

<p align="center">
  <img src="./figures/maisi_infer.jpg" alt="MAISI inference scheme">
  <br>
  <em>Figure 2: MAISI inference scheme</em>
</p>
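The two-stage workflow above can be sketched as toy Python. Everything here is a stand-in: `encode`, `decode`, and `denoise_step` are hypothetical placeholders illustrating the data flow, not the MAISI or MONAI APIs.

```python
# Toy sketch of the MAISI workflow: a VAE compresses images to latents,
# a diffusion model denoises latents, and the VAE decoder maps latents
# back to pixel space. All functions are illustrative placeholders.

def encode(image):                     # stand-in for the trained VAE encoder
    return [v / 4.0 for v in image]

def decode(latent):                    # stand-in for the trained VAE decoder
    return [v * 4.0 for v in latent]

def denoise_step(latent, step):        # stand-in for one reverse-diffusion step
    return [v * 0.5 for v in latent]   # pretend each step halves the noise

def infer(noise, num_steps=10):
    """Inference: iteratively denoise random noise, then decode to an image."""
    latent = noise
    for step in range(num_steps):
        latent = denoise_step(latent, step)
    return decode(latent)

image = infer([1.0, -2.0, 3.0])
```

In the real pipeline the "image" is a 3D CT volume and the latent is its compressed feature map; the loop structure is the part this sketch is meant to convey.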
MAISI is based on the following papers:

[**Latent Diffusion:** Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." CVPR 2022.](https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf)

[**ControlNet:** Lvmin Zhang, Anyi Rao, Maneesh Agrawala. "Adding Conditional Control to Text-to-Image Diffusion Models." ICCV 2023.](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_Adding_Conditional_Control_to_Text-to-Image_Diffusion_Models_ICCV_2023_paper.pdf)

### 1. Installation
Please refer to the [Installation of MONAI Generative Model](../README.md).

Note: MAISI depends on the [xFormers](https://github.com/facebookresearch/xformers) library.
ARM64 users can build xFormers from [source](https://github.com/facebookresearch/xformers?tab=readme-ov-file#installing-xformers) if the available wheel does not meet their requirements.
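Before running the tutorials, it can help to confirm that xFormers is importable. The check below is my own suggestion, not part of the tutorial:

```python
# Check whether the xformers package is installed without importing it
# (find_spec avoids triggering any heavy CUDA initialization on import).
import importlib.util

def has_xformers() -> bool:
    return importlib.util.find_spec("xformers") is not None

print("xformers available:", has_xformers())
```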
### 2. Model inference and example outputs
Please refer to [maisi_inference_tutorial.ipynb](maisi_inference_tutorial.ipynb) for a tutorial on MAISI model inference.

### 3. Training example
Training data preparation is described in [./data/README.md](./data/README.md).

#### [3.1 3D Autoencoder Training](./maisi_train_vae_tutorial.ipynb)

Please refer to [maisi_train_vae_tutorial.ipynb](maisi_train_vae_tutorial.ipynb) for a tutorial on MAISI VAE model training.

#### [3.2 3D Latent Diffusion Training](./scripts/diff_model_train.py)

Please refer to [maisi_diff_unet_training_tutorial.ipynb](maisi_diff_unet_training_tutorial.ipynb) for a tutorial on MAISI diffusion model training.
#### [3.3 3D ControlNet Training](./scripts/train_controlnet.py)

We provide a [training config](./configs/config_maisi_controlnet_train.json) for finetuning the pretrained ControlNet with a new class (e.g., Kidney Tumor).
When finetuning with other new class names, please update `weighted_loss_label` in the training config
and [label_dict.json](./configs/label_dict.json) accordingly. The default `label_dict.json` contains 8 dummy labels as deletable placeholders that can be used for finetuning. If more than 8 new labels are needed, users can freely define numeric label indices below 256; the current ControlNet implementation supports up to 256 labels (0-255).
The preprocessed dataset for ControlNet training and more details about data preparation can be found in the [README](./data/README.md).
#### Training Configuration
The training was performed with the following settings:
- GPU: at least 60 GB of GPU memory for a 512 × 512 × 512 volume
- Actual model input (the size of the 3D image feature in latent space) for the latent diffusion model: 128 × 128 × 128 for a 512 × 512 × 512 volume
- AMP: True
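The 512³ → 128³ relationship above corresponds to a 4× spatial downsampling per axis by the VAE encoder. The factor is inferred from the numbers in this section, not read from a MAISI config file, so treat it as an assumption:

```python
# Compute the latent feature-map size for a given volume size, assuming
# a uniform per-axis downsampling factor (4 is inferred from 512 -> 128).

def latent_shape(volume_shape, downsample_factor=4):
    """Spatial size of the latent features for a given input volume size."""
    return tuple(s // downsample_factor for s in volume_shape)

print(latent_shape((512, 512, 512)))  # -> (128, 128, 128)
print(latent_shape((512, 512, 768)))  # -> (128, 128, 192)
```

This is useful for estimating diffusion-model memory: the model operates on the latent shape, not the pixel-space volume.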
#### Execute Training
To train with a single GPU, please run:
```bash
python -m scripts.train_controlnet -c ./configs/config_maisi.json -t ./configs/config_maisi_controlnet_train.json -e ./configs/environment_maisi_controlnet_train.json -g 1
```

The training script also supports multi-GPU training. For instance, with eight GPUs you can run:
```bash
export NUM_GPUS_PER_NODE=8
torchrun \
    --nproc_per_node=${NUM_GPUS_PER_NODE} \
    --nnodes=1 \
    --master_addr=localhost --master_port=1234 \
    -m scripts.train_controlnet -c ./configs/config_maisi.json -t ./configs/config_maisi_controlnet_train.json -e ./configs/environment_maisi_controlnet_train.json -g ${NUM_GPUS_PER_NODE}
```
Please also check [maisi_train_controlnet_tutorial.ipynb](./maisi_train_controlnet_tutorial.ipynb) for more details about data preparation and training parameters.

### 4. License

The code is released under the Apache 2.0 License.

The model weights are released under the [NSCLv1 License](./LICENSE.weights).

### 5. Questions and Bugs

- For questions relating to the use of MONAI, please use our [Discussions tab](https://github.com/Project-MONAI/MONAI/discussions) on the main repository of MONAI.
- For bugs relating to MONAI functionality, please create an issue on the [main repository](https://github.com/Project-MONAI/MONAI/issues).
- For bugs relating to running a tutorial, please create an issue in [this repository](https://github.com/Project-MONAI/Tutorials/issues).
### `data/README.md` (new file, +138 lines)

# Medical AI for Synthetic Imaging (MAISI) Data Preparation
Disclaimer: We are not the hosts of the data. Please make sure to read the requirements and usage policies of the data and give credit to the authors of the datasets!

### 1 VAE Training Data

For the released foundation autoencoder model weights in MAISI, we used 37,243 CT training volumes and 1,963 CT validation volumes from the chest, abdomen, and head-and-neck regions, and 17,887 MRI training volumes and 940 MRI validation volumes from the brain, skull-stripped brain, chest, and below-abdomen regions. The training data come from [TCIA Covid 19 Chest CT](https://wiki.cancerimagingarchive.net/display/Public/CT+Images+in+COVID-19#70227107b92475d33ae7421a9b9c426f5bb7d5b3), [TCIA Colon Abdomen CT](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=3539213), [MSD03 Liver Abdomen CT](http://medicaldecathlon.com/), [LIDC chest CT](https://www.cancerimagingarchive.net/collection/lidc-idri/), [TCIA Stony Brook Covid Chest CT](https://www.cancerimagingarchive.net/collection/covid-19-ny-sbu/), [NLST Chest CT](https://www.cancerimagingarchive.net/collection/nlst/), [TCIA Upenn GBM Brain MR](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70225642), [Aomic Brain MR](https://openneuro.org/datasets/ds003097/versions/1.2.1), [QTIM Brain MR](https://openneuro.org/datasets/ds004169/versions/1.0.7), [TCIA Acrin Chest MR](https://www.cancerimagingarchive.net/collection/acrin-contralateral-breast-mr/), and [TCIA Prostate MR Below-Abdomen MR](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=68550661#68550661a2c52df5969d435eae49b9669bea21a6).
In total, we included:

| Index | Dataset Name | Number of Training Data | Number of Validation Data |
|-------|------------------------------------------|-------------------------|---------------------------|
| 1 | Covid 19 Chest CT | 722 | 49 |
| 2 | TCIA Colon Abdomen CT | 1522 | 77 |
| 3 | MSD03 Liver Abdomen CT | 104 | 0 |
| 4 | LIDC chest CT | 450 | 24 |
| 5 | TCIA Stony Brook Covid Chest CT | 2644 | 139 |
| 6 | NLST Chest CT | 31801 | 1674 |
| 7 | TCIA Upenn GBM Brain MR (skull-stripped) | 2550 | 134 |
| 8 | Aomic Brain MR | 2630 | 138 |
| 9 | QTIM Brain MR | 1275 | 67 |
| 10 | Acrin Chest MR | 6599 | 347 |
| 11 | TCIA Prostate MR Below-Abdomen MR | 928 | 49 |
| 12 | Aomic Brain MR, skull-stripped | 2630 | 138 |
| 13 | QTIM Brain MR, skull-stripped | 1275 | 67 |
| | Total CT | 37243 | 1963 |
| | Total MRI | 17887 | 940 |
### 2 Diffusion Model Training Data

The training dataset for the diffusion model used in MAISI comprises 10,277 CT volumes from 24 distinct datasets, encompassing various body regions and disease patterns.

The table below summarizes the number of volumes for each dataset.
| Index | Dataset name | Number of volumes |
|:-----|:-----|:-----|
| 1 | AbdomenCT-1K | 789 |
| 2 | AeroPath | 15 |
| 3 | AMOS22 | 240 |
| 4 | autoPET23 | 200 |
| 5 | Bone-Lesion | 223 |
| 6 | BTCV | 48 |
| 7 | COVID-19 | 524 |
| 8 | CRLM-CT | 158 |
| 9 | CT-ORG | 94 |
| 10 | CTPelvic1K-CLINIC | 94 |
| 11 | LIDC | 422 |
| 12 | MSD Task03 | 88 |
| 13 | MSD Task06 | 50 |
| 14 | MSD Task07 | 224 |
| 15 | MSD Task08 | 235 |
| 16 | MSD Task09 | 33 |
| 17 | MSD Task10 | 87 |
| 18 | Multi-organ-Abdominal-CT | 65 |
| 19 | NLST | 3109 |
| 20 | Pancreas-CT | 51 |
| 21 | StonyBrook-CT | 1258 |
| 22 | TCIA_Colon | 1437 |
| 23 | TotalSegmentatorV2 | 654 |
| 24 | VerSe | 179 |
### 3 ControlNet Model Training Data

#### 3.1 Example Preprocessed Dataset

We provide a preprocessed subset of the [C4KC-KiTS](https://www.cancerimagingarchive.net/collection/c4kc-kits/) dataset, used in the finetuning config `environment_maisi_controlnet_train.json`. The dataset and the corresponding JSON data list can be downloaded from [this link](https://drive.google.com/drive/folders/1iMStdYxcl26dEXgJEXOjkWvx-I2fYZ2u?usp=sharing) and should be saved in the `maisi/dataset/` folder.

The structure of an example folder in the preprocessed dataset is:

```
            |-*arterial*.nii.gz            # original image
            |-*arterial_emb*.nii.gz        # encoded image embedding
KiTS-000* --|-mask*.nii.gz                 # original labels
            |-mask_pseudo_label*.nii.gz    # pseudo labels
            |-mask_combined_label*.nii.gz  # combined mask of original and pseudo labels
```
An example combined mask of original and pseudo labels is shown below:

[figure: example combined mask of original and pseudo labels]

Please note that the Kidney Tumor label is mapped to index `129` in this preprocessed dataset. The encoded image embeddings are generated during preprocessing by the provided `Autoencoder` in `./models/autoencoder_epoch273.pt`, to reduce memory usage during training. The pseudo labels are generated by [VISTA 3D](https://github.com/Project-MONAI/VISTA). In addition, each volume and its corresponding pseudo label are resampled so that each dimension is the closest multiple of 128 (e.g., 128, 256, 384, 512, ...).
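The resampling rule above ("closest multiple of 128") can be written as a small helper. This is my reading of the text; the actual preprocessing script may round differently:

```python
# Round a volume dimension to the closest positive multiple of 128,
# matching the resampling rule described for the preprocessed dataset.

def closest_multiple_of_128(dim: int) -> int:
    return max(128, round(dim / 128) * 128)

print(closest_multiple_of_128(512))  # -> 512
print(closest_multiple_of_128(330))  # -> 384
```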
The training workflow requires a JSON file that specifies the image embedding and segmentation pairs. An example file is located at `maisi/dataset/C4KC-KiTS_subset.json`.

The JSON file has the following structure:
```python
{
    "training": [
        {
            "image": "*/*arterial_emb*.nii.gz",        # relative path to the image embedding file
            "label": "*/mask_combined_label*.nii.gz",  # relative path to the combined label file
            "dim": [512, 512, 512],                    # the dimensions of the image
            "spacing": [1.0, 1.0, 1.0],                # the spacing of the image
            "top_region_index": [0, 1, 0, 0],          # the top region index of the image
            "bottom_region_index": [0, 0, 0, 1],       # the bottom region index of the image
            "fold": 0                                  # fold index for cross-validation; fold 0 is used for training
        },

        ...
    ]
}
```
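A datalist like the one above can be assembled and sanity-checked in a few lines. The field names are taken from the example; the concrete file names are hypothetical, and the required-key check is my own addition rather than part of the MAISI scripts:

```python
# Build a minimal one-entry datalist with the fields shown above and
# verify each entry carries every required key before writing it out.
import json

REQUIRED_KEYS = {"image", "label", "dim", "spacing",
                 "top_region_index", "bottom_region_index", "fold"}

entry = {
    "image": "KiTS-00000/arterial_emb.nii.gz",           # hypothetical path
    "label": "KiTS-00000/mask_combined_label.nii.gz",    # hypothetical path
    "dim": [512, 512, 512],
    "spacing": [1.0, 1.0, 1.0],
    "top_region_index": [0, 1, 0, 0],
    "bottom_region_index": [0, 0, 0, 1],
    "fold": 0,
}

datalist = {"training": [entry]}
assert all(REQUIRED_KEYS <= e.keys() for e in datalist["training"])
text = json.dumps(datalist, indent=4)
```

Writing `text` to `maisi/dataset/<name>.json` would produce a file with the same shape as the provided `C4KC-KiTS_subset.json`.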
#### 3.2 ControlNet Full Training Datasets
The ControlNet training dataset used in MAISI contains 6,330 CT volumes (5,058 for training and 1,272 for validation) across 20 datasets, covering different body regions and diseases.

The table below summarizes the number of volumes for each dataset.

| Index | Dataset name | Number of volumes |
|:-----|:-----|:-----|
| 1 | AbdomenCT-1K | 789 |
| 2 | AeroPath | 15 |
| 3 | AMOS22 | 240 |
| 4 | Bone-Lesion | 237 |
| 5 | BTCV | 48 |
| 6 | CT-ORG | 94 |
| 7 | CTPelvic1K-CLINIC | 94 |
| 8 | LIDC | 422 |
| 9 | MSD Task03 | 105 |
| 10 | MSD Task06 | 50 |
| 11 | MSD Task07 | 225 |
| 12 | MSD Task08 | 235 |
| 13 | MSD Task09 | 33 |
| 14 | MSD Task10 | 101 |
| 15 | Multi-organ-Abdominal-CT | 64 |
| 16 | Pancreas-CT | 51 |
| 17 | StonyBrook-CT | 1258 |
| 18 | TCIA_Colon | 1436 |
| 19 | TotalSegmentatorV2 | 654 |
| 20 | VerSe | 179 |
### 4. Questions and Bugs

- For questions relating to the use of MONAI, please use our [Discussions tab](https://github.com/Project-MONAI/MONAI/discussions) on the main repository of MONAI.
- For bugs relating to MONAI functionality, please create an issue on the [main repository](https://github.com/Project-MONAI/MONAI/issues).
- For bugs relating to running a tutorial, please create an issue in [this repository](https://github.com/Project-MONAI/Tutorials/issues).

### Reference
[1] [Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." CVPR 2022.](https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf)