
Commit 6932ca6

committed
Update README.md
Signed-off-by: myron <[email protected]>
1 parent 1a0e245 commit 6932ca6

File tree

1 file changed: +111 −6 lines changed

auto3dseg/tasks/kits23/README.md

Lines changed: 111 additions & 6 deletions
@@ -40,7 +40,7 @@ If you prefer to run the same thing from code (which will allow more customization)

```bash
python example.py
```
```python
# example.py file content

from monai.apps.auto3dseg import AutoRunner

@@ -52,15 +52,120 @@ if __name__ == '__main__':

    main()
```

### Running from the code (more options)

The AutoRunner class of Auto3DSeg is very flexible and accepts parameters in various forms. For example, instead of providing a yaml file location (input.yaml), we can provide a dictionary directly:

```bash
python example2.py
```
```python
# example2.py file content

from monai.apps.auto3dseg import AutoRunner


def main():

    input_dict = {
        "modality": "CT",
        "dataroot": "/data/kits23",
        "datalist": "kits23_folds.json",
        "sigmoid": True,
        "class_names": [
            {"name": "kidney_and_mass", "index": [1, 2, 3]},
            {"name": "mass", "index": [2, 3]},
            {"name": "tumor", "index": [2]},
        ],
    }
    runner = AutoRunner(input=input_dict, algos="segresnet")
    runner.set_num_fold(1)  # train only 1 fold (instead of 5)
    runner.run()


if __name__ == '__main__':
    main()
```
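As a side note, the `class_names` entries together with `sigmoid: True` describe overlapping (multi-label) output channels: each entry collapses a set of raw KiTS label indices into one binary channel, so a voxel labeled 2 (tumor) is foreground in all three channels. A minimal pure-Python sketch of that mapping (illustrative only; `to_channels` is a hypothetical helper, not a MONAI function):

```python
# Illustrative sketch of the class_names/sigmoid semantics:
# each class collapses several raw label values into one binary channel.
# NOTE: to_channels() is a hypothetical helper, not part of MONAI.

CLASS_NAMES = [
    {"name": "kidney_and_mass", "index": [1, 2, 3]},
    {"name": "mass", "index": [2, 3]},
    {"name": "tumor", "index": [2]},
]

def to_channels(labels, class_names=CLASS_NAMES):
    """Map a flat list of integer labels to per-class binary masks."""
    return {
        c["name"]: [1 if v in c["index"] else 0 for v in labels]
        for c in class_names
    }

masks = to_channels([0, 1, 2, 3])
print(masks["kidney_and_mass"])  # [0, 1, 1, 1]
print(masks["tumor"])            # [0, 0, 1, 0]
```

Because the channels overlap, a per-channel sigmoid (rather than a softmax across channels) is the natural output activation, hence the `sigmoid` flag.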

The dictionary form of the input config is equivalent to the input.yaml. Notice that here we also added `runner.set_num_fold(1)` to train only 1 fold. By default, the system determines the number of folds based on the datalist.json file (5 folds in this case) and trains 5 models using cross-validation. However, one can train only 1 model (fold 0), which is much faster if a single output model is sufficient.

### Input.yaml options

Regardless of whether you prefer the yaml file form or an explicit dictionary config form, you can manually add many options to override the automatic defaults. For example, consider the following input2.yaml file.
```yaml
# input2.yaml file content example with more options

# KiTS23 Auto3DSeg user input

modality: CT
dataroot: /data/kits23
datalist: kits23_folds.json
class_names:
- { name: kidney_and_mass, index: [1,2,3] }
- { name: mass, index: [2,3] }
- { name: tumor, index: [2] }
sigmoid: true

# additional options (OPTIONAL)
auto_scale_allowed: false # disable auto-scaling of some parameters to your GPU
num_epochs: 600 # manually set the number of training epochs to 600 (otherwise it is determined automatically)
resample: true # explicitly resample images to the resample_resolution (for KiTS it is already auto-detected to resample)
resample_resolution: [0.78125, 0.78125, 0.78125] # set the resample resolution manually (the automated default here is 0.78x0.78x1)
roi_size: [336, 336, 336] # set the cropping ROI size (for this large ROI you may need a GPU with >40GB memory); try smaller for your GPU
loss: {_target_: DiceLoss} # change the loss to pure Dice (the default is DiceCELoss)
batch_size: 1 # the batch size is determined automatically according to your GPU, but you can set it manually
augment_mode: ct_ax_1 # change the default augmentation transform sequence to an alternative (with only in-plane/axial spatial rotations and scaling)

```
Here we added more optional settings to manually fine-tune performance. The full list of available options can be found [here](https://github.com/Project-MONAI/research-contributions/blob/main/auto3dseg/algorithm_templates/segresnet/configs/hyper_parameters.yaml).
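The `loss: {_target_: DiceLoss}` line uses the `_target_`-style config convention, where a config entry names the class to construct and the remaining keys become its constructor arguments. A toy sketch of the idea (the registry and `instantiate` below are hypothetical stand-ins, not MONAI's actual config machinery):

```python
# Toy sketch of "_target_"-style config instantiation.
# REGISTRY and instantiate() are hypothetical stand-ins shown only
# to convey the idea; MONAI resolves _target_ with its own machinery.

class DiceLoss:
    def __init__(self, squared_pred=False):
        self.squared_pred = squared_pred

class DiceCELoss:
    pass

REGISTRY = {"DiceLoss": DiceLoss, "DiceCELoss": DiceCELoss}

def instantiate(cfg):
    """Build the object named by _target_, passing remaining keys as kwargs."""
    kwargs = {k: v for k, v in cfg.items() if k != "_target_"}
    return REGISTRY[cfg["_target_"]](**kwargs)

loss = instantiate({"_target_": "DiceLoss", "squared_pred": True})
print(type(loss).__name__)  # DiceLoss
```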

### Input.yaml options and AutoRunner options combined

In the previous sections, we showed how to manually provide various input config options related to **training**. In the same file, one can also add AutoRunner-related options; consider the following input3.yaml config.
```yaml
# input3.yaml file content example with more options

# KiTS23 Auto3DSeg user input

modality: CT
dataroot: /data/kits23
datalist: kits23_folds.json
class_names:
- { name: kidney_and_mass, index: [1,2,3] }
- { name: mass, index: [2,3] }
- { name: tumor, index: [2] }
sigmoid: true

# additional options (OPTIONAL)
num_epochs: 600 # manually set the number of training epochs to 600 (otherwise it is determined automatically)

# additional AutoRunner options (OPTIONAL)
algos: segresnet
num_fold: 1
ensemble: false
work_dir: tmp/tutorial_kits23

```
Here we indicate to use only the "segresnet" algo, train only 1 fold, skip ensembling (since we train only 1 model anyway), and change the default working directory. We can then run it simply as
```bash
python -m monai.apps.auto3dseg AutoRunner run --input=./input3.yaml
```
One may prefer this format to keep all options in a single file, instead of maintaining training options and AutoRunner options separately. The end result will be the same.
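Conceptually, a combined file like this is just one dictionary whose keys are routed either to the AutoRunner itself (`algos`, `num_fold`, `ensemble`, `work_dir`) or to training. A toy sketch of that routing (`split_config` is a hypothetical helper, not a MONAI API):

```python
# Toy sketch: partition a combined config into AutoRunner-level keys
# and training keys. split_config() is hypothetical, not a MONAI API.

RUNNER_KEYS = {"algos", "num_fold", "ensemble", "work_dir"}

combined = {
    "modality": "CT",
    "num_epochs": 600,
    "algos": "segresnet",
    "num_fold": 1,
    "ensemble": False,
    "work_dir": "tmp/tutorial_kits23",
}

def split_config(cfg, runner_keys=RUNNER_KEYS):
    """Route keys to the runner or to training based on a known key set."""
    runner = {k: v for k, v in cfg.items() if k in runner_keys}
    training = {k: v for k, v in cfg.items() if k not in runner_keys}
    return runner, training

runner_cfg, training_cfg = split_config(combined)
print(sorted(runner_cfg))    # ['algos', 'ensemble', 'num_fold', 'work_dir']
print(sorted(training_cfg))  # ['modality', 'num_epochs']
```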

### Command line options overrides

Finally, the command line form (one-liner) accepts an arbitrary number of extra command line options (which will override the ones in the input.yaml file), for instance:
```bash
python -m monai.apps.auto3dseg AutoRunner run --input=./input3.yaml --work_dir=tmp/another --dataroot=/myown/kits/location --num_epochs=10
```
Here the "work_dir", "dataroot", and "num_epochs" options will override any defaults or any options provided in input.yaml.
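The layering follows the usual "later source wins" precedence: built-in defaults, then input.yaml, then command line overrides. A minimal sketch of that precedence (`merge_configs` is a hypothetical helper, and the default values shown are made up for illustration):

```python
# Illustrative sketch of layered config precedence:
# defaults < input.yaml < command line. merge_configs() is hypothetical,
# and the "defaults" values are invented for this example.

defaults = {"num_epochs": 600, "work_dir": "./work_dir"}
from_yaml = {"work_dir": "tmp/tutorial_kits23", "num_fold": 1}
from_cli = {"work_dir": "tmp/another", "num_epochs": 10}

def merge_configs(*layers):
    """Merge config layers; keys in later layers override earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

config = merge_configs(defaults, from_yaml, from_cli)
print(config["work_dir"])    # tmp/another
print(config["num_epochs"])  # 10
```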

## Validation performance: NVIDIA DGX-1 (8x V100 32G)

Training on an 8-GPU V100 32GB DGX machine, one can expect to get an average Dice of 0.87-0.88 (for fold 0). The higher-end accuracy is obtained if you set a larger ROI size (e.g. roi_size: [336, 336, 336]), but this requires a large-memory GPU device (such as an A10 or A100). Alternatively, you can experiment with training longer, e.g. by setting num_epochs=1200.

## Difference with 1st place KiTS23 solution

The example here is based on the 1st place KiTS23 solution [1], the main difference being that in [1] the training was done in 2 stages: first, the approximate kidney region was detected (by training a model to segment the foreground); second, an ensemble of models was trained to segment the 3 KiTS subregions using the detected kidney subregion. In this tutorial, for simplicity, we train to segment the KiTS subregions directly on the full image (which gives a slightly lower average dice, ~1%). Another difference is that in [1] an ensemble of several models was trained, including both segresnet and dints models, whereas in this tutorial we focus only on segresnet.

## Data
