Swin_unetr_btcv_segmentation #991
18 comments · 5 replies
-
Hi @JoonilHwang, thanks for posting this. It seems there is an environment compatibility issue with MetaTensor; can you check your environment and see whether it's a MetaTensor issue? If not, can you provide some more detailed information, such as which data you used (check whether the data format matches brats21 in the tutorial), or a simple code block to reproduce the error?
-
Hi @tangy5, thank you for the reply. I have checked the MONAI updates and I am using MONAI 1.0.0. This is the environment I used:
MONAI version: 1.0.0
Numpy version: 1.23.1
Pytorch version: 1.12.1
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: 170093375ce29267e45681fcec09dfa856e1d7e7
MONAI file: c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\__init__.py
Optional dependencies:
Pytorch Ignite version: 0.4.10
Nibabel version: 4.0.2
scikit-image version: 0.19.3
Pillow version: 9.2.0
Tensorboard version: 2.10.1
gdown version: 4.5.1
TorchVision version: 0.13.1
tqdm version: 4.64.1
lmdb version: 1.3.0
psutil version: 5.9.2
pandas version: 1.5.0
einops version: 0.5.0
transformers version: 4.21.3
mlflow version: 1.29.0
pynrrd version: 1.0.0
This is one data file's header information:
<class 'nibabel.nifti1.Nifti1Header'> object, endian='<'
sizeof_hdr      : 348
data_type       : b''
db_name         : b''
extents         : 0
session_error   : 0
regular         : b''
dim_info        : 0
dim             : [  3 512 512 179   1   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : int16
bitpix          : 16
slice_start     : 0
pixdim          : [-1.     1.057  1.057  3.     1.     1.     1.     1.   ]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 2
cal_max         : 0.0
cal_min         : 0.0
...
srow_y          : [  -0.      1.057    0.   -245.343]
srow_z          : [    0.      -0.       3.   -1309.5]
intent_name     : b''
magic           : b'n+1'
This is the code where the error occurs: https://user-images.githubusercontent.com/115773420/195778233-d4ca91de-ded0-4e65-ab00-e4a8540619e6.png
The other parameters are the same as in swin_unetr_btcv_segmentation_3d.
For my dataset I only want to segment the breast, so in the JSON file the labels only have 0 for background and 1 for breast. Accordingly, when creating the model I changed out_channels=2, and in training I set "to_onehot=2".
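For reference, a minimal sketch of that two-class setup, assuming the objects are named as in the tutorial (this is an illustration, not the exact notebook code):

from monai.networks.nets import SwinUNETR
from monai.losses import DiceCELoss
from monai.transforms import AsDiscrete

# two classes: channel 0 = background, channel 1 = breast
model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=2, feature_size=48)
loss_function = DiceCELoss(to_onehot_y=True, softmax=True)  # one-hots the single-channel label
post_label = AsDiscrete(to_onehot=2)              # ground truth -> 2-channel one-hot
post_pred = AsDiscrete(argmax=True, to_onehot=2)  # logits -> argmax -> one-hot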
-
Hi @JoonilHwang, I see your env is MONAI 1.0.0, the version that supports MetaTensor. The error logs show there are issues with the meta keys when loading data. The BTCV tutorial should be good, and it used CT images in NIfTI format. I guess the data meta information is the issue: can you check whether your breast data are loaded from NIfTI files that include the meta dict? If there is no meta data information, you can adjust the LoadImaged transform to set image_only to True, referring to https://docs.monai.io/en/stable/transforms.html#loadimaged. Thanks.
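For example, a minimal sketch of that adjustment, following the LoadImaged docs linked above (img.nii.gz and seg.nii.gz are placeholder paths):

from monai.transforms import LoadImaged

# image_only=True returns only the image (a MetaTensor in MONAI 1.0)
# instead of also producing a separate *_meta_dict entry
loader = LoadImaged(keys=["image", "label"], image_only=True)
data = loader({"image": "img.nii.gz", "label": "seg.nii.gz"})  # placeholder paths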
-
Hi @tangy5, thank you for replying. I changed LoadImaged to set image_only to True, and the AttributeError no longer occurs.
However, the ValueError and RuntimeError still occur.
Below is the error message:
Training (X / X Steps) (loss=X.X): 0%| | 0/90 [00:00<?, ?it/s]
`data_array` is not of type `MetaTensor`, assuming affine to be identity.
`data_array` is not of type `MetaTensor`, assuming affine to be identity.
Exception in thread Thread-22:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 91, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 55, in _apply_transform
return transform(parameters)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\dictionary.py", line 861, in __call__
self.randomize(label, fg_indices, bg_indices, image)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\dictionary.py", line 852, in randomize
self.cropper.randomize(label=label, fg_indices=fg_indices, bg_indices=bg_indices, image=image)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\array.py", line 1060, in randomize
self.centers = generate_pos_neg_label_crop_centers(
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\utils.py", line 520, in generate_pos_neg_label_crop_centers
centers.append(correct_crop_centers(center, spatial_size, label_spatial_shape, allow_smaller))
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\utils.py", line 447, in correct_crop_centers
raise ValueError("The size of the proposed random crop ROI is larger than the image size.")
ValueError: The size of the proposed random crop ROI is larger than the image size.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 91, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 55, in _apply_transform
return transform(parameters)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\compose.py", line 173, in __call__
input_ = apply_transform(_transform, input_, self.map_items, self.unpack_items, self.log_stats)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 118, in apply_transform
raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.croppad.dictionary.RandCropByPosNegLabeld object at 0x0000023F8F0ED970>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\thread_buffer.py", line 48, in enqueue_values
for src_val in self.src:
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\dataloader.py", line 681, in __next__
data = self._next_data()
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\dataloader.py", line 721, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\dataset.py", line 105, in __getitem__
return self._transform(index)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\dataset.py", line 863, in _transform
return super()._transform(index_)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\dataset.py", line 91, in _transform
return apply_transform(self.transform, data_i) if self.transform is not None else data_i
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 118, in apply_transform
raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.compose.Compose object at 0x0000023F8F0EDD90>
Training (X / X Steps) (loss=X.X): 0%| | 0/90 [00:01<?, ?it/s]
(the same ValueError / RuntimeError traceback repeats for Thread-23, Thread-24, and Thread-25, interleaved with progress lines)
Training (1 / 30000 Steps) (loss=1.43538): 2%|▏ | 2/90 [00:15<11:36, 7.91s/it]
(another identical exception begins in Thread-26)
I guess it is a matter of the spatial size in the SwinUNETR,
but the transforms I composed are as below, and likewise for the network.
I have checked that the image size is more than (512, 512, xx (over 100)).
Thanks.
-
@JoonilHwang, I saw this error log: ValueError: The size of the proposed random crop ROI is larger than the image size. It seems your data's input dimension is smaller than the crop transform's ROI. Since you mentioned the data's original size is more than (512, 512, xx), you could check whether Spacingd resampled the data to a smaller size; cropping to a larger ROI then causes the error.
One possible solution is to set a higher resolution so that the image dimension is larger than the crop ROI size, for example set isotropic 1.0 mm in the following Spacingd:
Spacingd(
    keys=["image", "label"],
    pixdim=(1.5, 1.5, 2.0),
    mode=("bilinear", "nearest"),
),
The other option is to add a SpatialPadd transform before cropping, so that it guarantees the smallest dimension is at least the cropping ROI size. (The spatial size in the SwinUNETR is for choosing between training on 2D or 3D data.)
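A minimal sketch of that second option, placing the pad inside the Compose list immediately before the crop (the (96, 96, 96) ROI matches the tutorial's crop size; this is an illustration, not the notebook's exact code):

    SpatialPadd(keys=["image", "label"], spatial_size=(96, 96, 96)),  # guarantee minimum size
    RandCropByPosNegLabeld(
        keys=["image", "label"],
        label_key="label",
        spatial_size=(96, 96, 96),
        pos=1,
        neg=1,
        num_samples=4,
        image_key="image",
        image_threshold=0,
    ),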
-
@tangy5, thanks for the reply. I have changed Spacingd's pixdim to [0.7, 0.7, 3]
and checked the image size after the transform.
Here is the result, and I think the size of the image is bigger than the spatial_size (96, 96, 96).
But in training, ValueError: The size of the proposed random crop ROI is larger than the image size
still occurs....
Best, Joonil
-
Thanks!! The ValueError problem is solved!!
But now a TypeError occurs...
Training (X / X Steps) (loss=X.X): 0%| | 0/90 [00:00<?, ?it/s]
`data_array` is not of type `MetaTensor`, assuming affine to be identity.
`data_array` is not of type `MetaTensor`, assuming affine to be identity.
Exception in thread Thread-23:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\array.py", line 184, in __call__
out = _pad(img_t, pad_width=to_pad_, mode=mode_, **kwargs_)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\array.py", line 129, in _np_pad
out = torch.as_tensor(np.pad(img, pad_width, mode=mode, **kwargs))
File "<__array_function__ internals>", line 180, in pad
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages
umpy\lib\arraypad.py", line 736, in pad
array = np.asarray(array)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\_tensor.py", line 757, in __array__
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 91, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 55, in _apply_transform
return transform(parameters)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\dictionary.py", line 147, in __call__
d[key] = self.padder(d[key], mode=m)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\array.py", line 189, in __call__
raise ValueError(f"{mode_}, {kwargs_}, {img_t.dtype}, {img_t.device}") from err
ValueError: constant, {}, torch.float32, cuda:0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 91, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 55, in _apply_transform
return transform(parameters)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\compose.py", line 173, in __call__
input_ = apply_transform(_transform, input_, self.map_items, self.unpack_items, self.log_stats)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 118, in apply_transform
raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.croppad.dictionary.SpatialPadd object at 0x0000022A412D7E50>
Above is the error message. Should I convert my datasets to int16?
Best, Joonil
-
@JoonilHwang It seems there are some errors when converting numpy array and Tensor back and forth. You might need to double check the transforms. Thanks. |
-
@tangy5, I don't know why it doesn't work, but when I use the MONAI tutorial data it works,
and with my own data it doesn't... I set up my dataset exactly like the tutorial data (and converted it to nii.gz files),
but these errors still occur:
(the same TypeError: can't convert cuda:0 device type tensor to numpy / ValueError traceback as above, raised from the padding transform in Thread-35)
Thanks!
-
Hi @JoonilHwang, thanks. Like I mentioned above, the current error logs are not very informative: I can see the errors come from transforms when converting data between array and tensor, but I can't locate which transform. You could paste the code you used here and see if anyone is familiar with the error. You may also attach a minimal sample code and/or sample data (or a synthetic one) to reproduce the error; that way, even if I can't tell the problem, someone else can reply. Thanks again.
-
Thank you so much.
Below is the simple code I used for training.
<transforms>
num_samples = 4
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
train_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"], image_only=True, ensure_channel_first=True),
        ScaleIntensityRanged(
            keys=["image"],
            a_min=-175,
            a_max=255,
            b_min=0.0,
            b_max=1.0,
            clip=True,
        ),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        Spacingd(
            keys=["image", "label"],
            pixdim=(0.7, 0.7, 2.0),
            mode=("bilinear", "nearest"),
        ),
        EnsureTyped(keys=["image", "label"], device=device, track_meta=False),
        SpatialPadd(keys=["image", "label"], spatial_size=(96, 96, 96)),
        RandCropByPosNegLabeld(
            keys=["image", "label"],
            label_key="label",
            spatial_size=(96, 96, 96),
            pos=1,
            neg=1,
            num_samples=num_samples,
            image_key="image",
            image_threshold=0,
        ),
        RandFlipd(
            keys=["image", "label"],
            spatial_axis=[0],
            prob=0.10,
        ),
        RandFlipd(
            keys=["image", "label"],
            spatial_axis=[1],
            prob=0.10,
        ),
        RandFlipd(
            keys=["image", "label"],
            spatial_axis=[2],
            prob=0.10,
        ),
        RandRotate90d(
            keys=["image", "label"],
            prob=0.10,
            max_k=3,
        ),
        RandShiftIntensityd(
            keys=["image"],
            offsets=0.10,
            prob=0.50,
        ),
    ]
)
val_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"], image_only=True, ensure_channel_first=True),
        ScaleIntensityRanged(
            keys=["image"], a_min=-175, a_max=255, b_min=0.0, b_max=1.0, clip=True
        ),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        Spacingd(
            keys=["image", "label"],
            pixdim=(0.7, 0.7, 2.0),
            mode=("bilinear", "nearest"),
        ),
        EnsureTyped(keys=["image", "label"], device=device, track_meta=True),
    ]
)
<data load>
data_dir = "./data/"
split_JSON = "dataset_0.json"
datasets = data_dir + split_JSON
datalist = load_decathlon_datalist(datasets, True, "training")
val_files = load_decathlon_datalist(datasets, True, "validation")
test_files = load_decathlon_datalist(datasets, True, "test")
train_ds = CacheDataset(
    data=datalist,
    transform=train_transforms,
    cache_num=24,
    cache_rate=1.0,
    num_workers=8,
)
train_loader = ThreadDataLoader(train_ds, num_workers=0, batch_size=1, shuffle=True)
val_ds = CacheDataset(
    data=val_files, transform=val_transforms, cache_num=6, cache_rate=1.0, num_workers=4
)
val_loader = ThreadDataLoader(val_ds, num_workers=0, batch_size=1)
set_track_meta(False)
<model>
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=1,
    out_channels=2,
    feature_size=48,
    use_checkpoint=True,
).to(device)
<optimizer and loss function>
torch.backends.cudnn.benchmark = True
loss_function = DiceCELoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-5)
scaler = torch.cuda.amp.GradScaler()
<training process>
def validation(epoch_iterator_val):
    model.eval()
    with torch.no_grad():
        for step, batch in enumerate(epoch_iterator_val):
            val_inputs, val_labels = (batch["image"].cuda(), batch["label"].cuda())
            with torch.cuda.amp.autocast():
                val_outputs = sliding_window_inference(val_inputs, (96, 96, 96), 4, model)
            val_labels_list = decollate_batch(val_labels)
            val_labels_convert = [
                post_label(val_label_tensor) for val_label_tensor in val_labels_list
            ]
            val_outputs_list = decollate_batch(val_outputs)
            val_output_convert = [
                post_pred(val_pred_tensor) for val_pred_tensor in val_outputs_list
            ]
            dice_metric(y_pred=val_output_convert, y=val_labels_convert)
            epoch_iterator_val.set_description(
                "Validate (%d / %d Steps)" % (global_step, 10.0)
            )
        mean_dice_val = dice_metric.aggregate().item()
        dice_metric.reset()
    return mean_dice_val
def train(global_step, train_loader, dice_val_best, global_step_best):
    model.train()
    epoch_loss = 0
    step = 0
    epoch_iterator = tqdm(
        train_loader, desc="Training (X / X Steps) (loss=X.X)", dynamic_ncols=True
    )
    for step, batch in enumerate(epoch_iterator):
        step += 1
        x, y = (batch["image"].cuda(), batch["label"].cuda())
        with torch.cuda.amp.autocast():
            logit_map = model(x)
            loss = loss_function(logit_map, y)
        scaler.scale(loss).backward()
        epoch_loss += loss.item()
        scaler.unscale_(optimizer)
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
        epoch_iterator.set_description(
            "Training (%d / %d Steps) (loss=%2.5f)"
            % (global_step, max_iterations, loss)
        )
        if (
            global_step % eval_num == 0 and global_step != 0
        ) or global_step == max_iterations:
            epoch_iterator_val = tqdm(
                val_loader, desc="Validate (X / X Steps) (dice=X.X)", dynamic_ncols=True
            )
            dice_val = validation(epoch_iterator_val)
            epoch_loss /= step
            epoch_loss_values.append(epoch_loss)
            metric_values.append(dice_val)
            if dice_val > dice_val_best:
                dice_val_best = dice_val
                global_step_best = global_step
                torch.save(
                    model.state_dict(), os.path.join('C:/Users/joonil/Desktop/best/', "best_metric_model.pth")
                )
                print(
                    "Model Was Saved ! Current Best Avg. Dice: {} Current Avg. Dice: {}".format(
                        dice_val_best, dice_val
                    )
                )
            else:
                print(
                    "Model Was Not Saved ! Current Best Avg. Dice: {} Current Avg. Dice: {}".format(
                        dice_val_best, dice_val
                    )
                )
        global_step += 1
    return global_step, dice_val_best, global_step_best
<training>
max_iterations = 30000
eval_num = 500
post_label = AsDiscrete(to_onehot=2)
post_pred = AsDiscrete(argmax=True, to_onehot=2)
dice_metric = DiceMetric(include_background=True, reduction="mean", get_not_nans=False)
global_step = 0
dice_val_best = 0.0
global_step_best = 0
epoch_loss_values = []
metric_values = []
while global_step < max_iterations:
    global_step, dice_val_best, global_step_best = train(
        global_step, train_loader, dice_val_best, global_step_best
    )
#model.load_state_dict(torch.load(os.path.join('C:/Users/joonil/Desktop/best/', "best_metric_model.pth")))
Below is my error output:
Training (0 / 30000 Steps) (loss=1.19929): 1%| | 1/90 [00:01<02:06, 1.42s/it]
`data_array` is not of type `MetaTensor`, assuming affine to be identity.
`data_array` is not of type `MetaTensor`, assuming affine to be identity.
Exception in thread Thread-35:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\array.py", line 184, in __call__
out = _pad(img_t, pad_width=to_pad_, mode=mode_, **kwargs_)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\array.py", line 129, in _np_pad
out = torch.as_tensor(np.pad(img, pad_width, mode=mode, **kwargs))
File "<__array_function__ internals>", line 180, in pad
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages
umpy\lib\arraypad.py", line 736, in pad
array = np.asarray(array)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\_tensor.py", line 757, in __array__
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 91, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 55, in _apply_transform
return transform(parameters)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\dictionary.py", line 147, in __call__
d[key] = self.padder(d[key], mode=m)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\croppad\array.py", line 189, in __call__
raise ValueError(f"{mode_}, {kwargs_}, {img_t.dtype}, {img_t.device}") from err
ValueError: constant, {}, torch.float32, cuda:0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 91, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 55, in _apply_transform
return transform(parameters)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\compose.py", line 173, in __call__
input_ = apply_transform(_transform, input_, self.map_items, self.unpack_items, self.log_stats)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 118, in apply_transform
raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.croppad.dictionary.SpatialPadd object at 0x0000022A412D7E50>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\thread_buffer.py", line 48, in enqueue_values
for src_val in self.src:
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\dataloader.py", line 681, in __next__
data = self._next_data()
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\dataloader.py", line 721, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\dataset.py", line 105, in __getitem__
return self._transform(index)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\dataset.py", line 863, in _transform
return super()._transform(index_)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\dataset.py", line 91, in _transform
return apply_transform(self.transform, data_i) if self.transform is not None else data_i
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 118, in apply_transform
raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.compose.Compose object at 0x0000022A4131C370>
Training (0 / 30000 Steps) (loss=1.19929): 1%| | 1/90 [00:02<03:30, 2.37s/it]
(the same TypeError / ValueError / RuntimeError traceback repeats for Thread-36 and Thread-37)
Thanks!
-
Not so sure, but you could move the EnsureTyped(keys=["image", "label"], device=device, track_meta=False) in the train transforms to be the last transform. Your validation transforms might also need SpatialPadd. BTW, we can't explain every detail of the code; you can read and refer to the doc here: https://docs.monai.io/en/stable/transforms.html?highlight=ensuretype#monai.transforms.EnsureType Thanks again for using the tutorial.
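For reference, a minimal sketch of that reordering, using the transforms from the code above (an illustration of the suggestion, not verified against the notebook): SpatialPadd then runs on CPU data, where the NumPy-based padding works, and the tensors move to the GPU only at the end:

train_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"], image_only=True, ensure_channel_first=True),
        ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=255, b_min=0.0, b_max=1.0, clip=True),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        Spacingd(keys=["image", "label"], pixdim=(0.7, 0.7, 2.0), mode=("bilinear", "nearest")),
        SpatialPadd(keys=["image", "label"], spatial_size=(96, 96, 96)),  # pad while still on CPU
        RandCropByPosNegLabeld(
            keys=["image", "label"], label_key="label", spatial_size=(96, 96, 96),
            pos=1, neg=1, num_samples=4, image_key="image", image_threshold=0,
        ),
        # ... random flips / rotations / intensity shifts as before ...
        EnsureTyped(keys=["image", "label"], device=device, track_meta=False),  # move to GPU last
    ]
)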
-
Thank you very much!!
Now the code runs well!
-
Wonderful, have a great day.
-
Hi @tangy5, I have a new question...
I have just finished training the network. However, my original validation image size is [512, 512, 128],
and after the transforms
I got an array size of about (1, 1, 146, 175, 128).
This is the information of val_ds[0]:
{'image': tensor([...], device='cuda:0')  (values elided)
 image metadata:
   dim: [  3 512 512 128   1   1   1   1], datatype: 4, bitpix: 16
   pixdim: [1.    1.367 1.367 3.    0.    0.    0.    0.   ]
   affine: [[1.5, 0.0, 0.0, -67.6665], [0.0, 1.5, 0.0, -223.5045], [0.0, 0.0, 2.0, -85.5], [0.0, 0.0, 0.0, 1.0]]
   original_affine: [[-1.36699998, 0.0, 0.0, 349.26849586], [0.0, 1.36699998, 0.0, -349.26849586], [0.0, 0.0, 3.0, -190.5], [0.0, 0.0, 0.0, 1.0]]
   spatial_shape: [512 512 128], space: RAS, original_channel_dim: no_channel
   filename_or_obj: data1\CBCT\101.nii.gz
 image applied operations:
   CropForeground: cropped [146, 206, 92, 228, 35, 7], orig_size (512, 512, 128)
   Orientation: orig_size (160, 192, 86)
   SpatialResample: mode bilinear, padding_mode border, orig_size (160, 192, 86)
   SpatialPad: padded [(0, 0), (0, 0), (0, 0), (0, 0)], orig_size (146, 175, 128)
 'label': tensor([...], device='cuda:0') with the same metadata and applied operations
   (except datatype: 2, bitpix: 8, resample mode nearest, filename_or_obj: data1\RTstruct\101.nii.gz),
 'foreground_start_coord': [146, 92, 35],
 'foreground_end_coord': [306, 284, 121]}
Then how can I convert the output result [146, 175, 128] back into the original-size image [512, 512, 128]?
Below are the 36th and 50th slice images of the network output and the original image.
[36th]
[50th]
Thanks
Best,
Joonil
-
Hi @JoonilHwang, thanks for the follow-ups; it seems you are looking for the testing transforms this time. Yes, several post transforms are needed to recover the prediction back to its original image space. In addition to the validation transforms, you could add these transforms at the end:
post_transforms = Compose([
    Invertd(
        keys="pred",
        transform=val_transforms,
        orig_keys="image",
        meta_keys="pred_meta_dict",
        orig_meta_keys="image_meta_dict",
        meta_key_postfix="meta_dict",
        nearest_interp=False,
        to_tensor=True,
    ),
    AsDiscreted(keys="pred", argmax=True, to_onehot=2),
    SaveImaged(keys="pred", meta_keys="pred_meta_dict", output_dir="./out", output_postfix="seg", resample=False),
])
For an example implementation, you could refer to the section "Inference on Test Set" here: https://github.com/Project-MONAI/tutorials/blob/main/3d_segmentation/spleen_segmentation_3d.ipynb
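For context, a minimal sketch of how these post transforms are typically driven from an inference loop, following the pattern of the spleen tutorial linked above; val_loader, model, and device are assumptions carried over from the surrounding code, not part of the original reply:
import torch
from monai.data import decollate_batch
from monai.inferers import sliding_window_inference

model.eval()
with torch.no_grad():
    for batch in val_loader:
        # keep the raw logits in "pred"; AsDiscreted inside post_transforms
        # performs the argmax and one-hot conversion afterwards
        batch["pred"] = sliding_window_inference(
            batch["image"].to(device), (96, 96, 96), 4, model, overlap=0.8
        )
        # decollate the batch into per-sample dicts, then invert each
        # prediction back to the original image space and save it
        batch = [post_transforms(i) for i in decollate_batch(batch)]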
-
Thanks for the reply!
@tangy5, however the post_transforms code causes some errors...
This is the code I used (below):
val_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"], image_only=True, ensure_channel_first=True),
        ScaleIntensityRanged(
            keys=["image"], a_min=-450, a_max=150, b_min=0.0, b_max=1.0, clip=True
        ),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        Spacingd(
            keys=["image", "label"],
            pixdim=(1.5, 1.5, 2.0),
            mode=("bilinear", "nearest"),
        ),
        SpatialPadd(keys=["image", "label"], spatial_size=(96, 96, 96)),
        EnsureTyped(keys=["image", "label"], device=device, track_meta=True),
    ]
)
post_transforms = Compose(
    [
        Invertd(
            keys="pred",
            transform=val_transforms,
            orig_keys="image",
            meta_keys="pred_meta_dict",
            orig_meta_keys="image_meta_dict",
            meta_key_postfix="meta_dict",
            nearest_interp=False,
            to_tensor=True,
            # allow_missing_keys=True,
        ),
        AsDiscreted(keys="pred", argmax=True, to_onehot=2),
        SaveImaged(keys="pred", meta_keys="pred_meta_dict", output_dir="./out", output_postfix="seg", resample=False),
    ]
)
data_dir = "./data/"
split_JSON = "dataset_0.json"
datasets = data_dir + split_JSON
datalist = load_decathlon_datalist(datasets, True, "training")
val_files = load_decathlon_datalist(datasets, True, "validation")
test_files = load_decathlon_datalist(datasets, True, "test")
train_ds = CacheDataset(
    data=datalist,
    transform=train_transforms,
    cache_num=90,
    cache_rate=1.0,
    num_workers=8,
)
train_loader = ThreadDataLoader(train_ds, num_workers=0, batch_size=1, shuffle=True)
val_ds = CacheDataset(
    data=val_files, transform=val_transforms, cache_num=10, cache_rate=1.0, num_workers=4
)
val_loader = ThreadDataLoader(val_ds, num_workers=0, batch_size=1)
set_track_meta(False)
model.load_state_dict(torch.load(os.path.join(root_dir, "best_metric_model.pth")))
model.eval()
with torch.no_grad():
    img = val_ds[0]["image"]
    # print(img.shape)
    val_inputs = torch.unsqueeze(img, 1).cuda()
    # print(val_inputs.shape)
    val_ds1[0]["pred"] = torch.argmax(
        sliding_window_inference(val_inputs, (96, 96, 96), 4, model, overlap=0.8), dim=1
    ).detach().cpu()
    # print(val_ds[0]["pred"].shape)
    val_ds1[0] = [post_transforms(i) for i in decollate_batch(val_ds1[0])]
    val_outputs, val_labels = from_engine(["pred", "label"])(val_ds1[0])
    # torch.argmax(val_outputs, dim=1).detach().cpu().numpy()[0, :, :, j]
    print(val_ds1[0]["pred"].shape)
The error message is as below:
None of the inputs have requires_grad=True. Gradients will be None
transform info of `image` is not available or no InvertibleTransform applied.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
File c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py:91, in apply_transform(transform, data, map_items, unpack_items, log_stats)
90 return [_apply_transform(transform, item, unpack_items) for item in data]
---> 91 return _apply_transform(transform, data, unpack_items)
92 except Exception as e:
93 # if in debug mode, don't swallow exception so that the breakpoint
94 # appears where the exception was raised.
File c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py:55, in _apply_transform(transform, parameters, unpack_parameters)
53 return transform(*parameters)
---> 55 return transform(parameters)
File c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\post\dictionary.py:189, in AsDiscreted.__call__(self, data)
186 for key, argmax, to_onehot, threshold, rounding in self.key_iterator(
187 d, self.argmax, self.to_onehot, self.threshold, self.rounding
188 ):
--> 189 d[key] = self.converter(d[key], argmax, to_onehot, threshold, rounding)
190 return d
File c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\post\array.py:209, in AsDiscrete.__call__(self, img, argmax, to_onehot, threshold, rounding)
208 raise AssertionError("the number of classes for One-Hot must be an integer.")
--> 209 img_t = one_hot(img_t, num_classes=to_onehot, dim=0)
211 threshold = self.threshold if threshold is None else threshold
...
116 else:
117 _log_stats(data=data)
--> 118 raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.post.dictionary.AsDiscreted object at 0x000001C8088CDCA0>
Thanks!
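A hedged reading of this traceback (an assumption, not a confirmed diagnosis): "pred" is argmaxed over dim=1 before post_transforms runs, so AsDiscreted(argmax=True, to_onehot=2) receives a tensor with no channel dimension and one_hot fails; the "transform info of `image` is not available" warning above also suggests set_track_meta(False) discarded the trace that Invertd needs. A minimal sketch of the corresponding changes, reusing names from the code above:
set_track_meta(True)  # keep the transform trace so Invertd can undo it

with torch.no_grad():
    sample = val_ds[0]
    # add a batch dim for inference, then drop it again, but KEEP the
    # 2-class channel dim so AsDiscreted(argmax=True) can reduce it itself
    logits = sliding_window_inference(
        sample["image"].unsqueeze(0).cuda(), (96, 96, 96), 4, model, overlap=0.8
    )
    sample["pred"] = logits[0].detach().cpu()
    sample = post_transforms(sample)  # a single sample dict needs no decollate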
-
Hi, I'm trying to get swin_unetr_btcv segmentation working with my own data, but I keep getting an error while training...
The error message is as below:
Training (X / X Steps) (loss=X.X): 0%| | 0/90 [00:00<?, ?it/s]
Exception in thread Thread-22:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 91, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 55, in _apply_transform
return transform(parameters)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\io\dictionary.py", line 154, in call
data = self._loader(d[key], reader)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\io\array.py", line 281, in call
return img, img.meta # for compatibility purpose
AttributeError: 'Tensor' object has no attribute 'meta'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 91, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 55, in apply_transform
return transform(parameters)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\compose.py", line 173, in call
input = apply_transform(transform, input, self.map_items, self.unpack_items, self.log_stats)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 118, in apply_transform
raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.io.dictionary.LoadImaged object at 0x000002051B1C1E80>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\thread_buffer.py", line 48, in enqueue_values
for src_val in self.src:
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\dataloader.py", line 681, in next
data = self._next_data()
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data\dataloader.py", line 721, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\dataset.py", line 105, in getitem
return self._transform(index)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\dataset.py", line 863, in _transform
return super()._transform(index)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\data\dataset.py", line 91, in _transform
return apply_transform(self.transform, data_i) if self.transform is not None else data_i
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 118, in apply_transform
raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.compose.Compose object at 0x000002051B1CF7F0>
Training (0 / 30000 Steps) (loss=1.27800): 1%| | 1/90 [00:15<23:26, 15.80s/it]
Training (X / X Steps) (loss=X.X): 0%| | 0/90 [00:00<?, ?it/s]
Exception in thread Thread-23:
Traceback (most recent call last):
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 91, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\transform.py", line 55, in _apply_transform
return transform(parameters)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\io\dictionary.py", line 154, in call
data = self._loader(d[key], reader)
File "c:\Users\joonil\anaconda3\envs\3dseg\lib\site-packages\monai\transforms\io\array.py", line 281, in call
return img, img.meta # for compatibility purpose
AttributeError: 'Tensor' object has no attribute 'meta'
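A hedged guess at this error (an assumption from the traceback, not a verified fix): set_track_meta(False) is called earlier in the script, and with metadata tracking disabled LoadImage returns a plain torch.Tensor, so the img.meta access inside LoadImaged raises AttributeError. If so, tracking has to stay enabled while the loading transforms run:
from monai.data import set_track_meta

# MetaTensor tracking must be on while LoadImaged (and anything else that
# reads .meta) executes; only disable it, if at all, after metadata is no
# longer needed
set_track_meta(True)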