Replies: 1 comment · 2 replies
Hi @jmlipman, since random transforms always generate different patches, if your epoch count is large, I think you've achieved what you want with the …
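A minimal sketch of one way to act on that point (an assumption on my part, not a confirmed continuation of the reply, and reusing the `data` list from the example below): keep `num_samples` equal to the desired batch size with `batch_size=1`, and rely on the random crops being regenerated every epoch:

```python
from monai.data import Dataset, DataLoader
from monai.transforms import RandCropByPosNegLabeld

BATCH_SIZE = 2
NUM_EPOCHS = 200  # a large epoch count -> many distinct random patches overall

# Crops per image set to the desired batch size (`data` as in the example below).
crop = RandCropByPosNegLabeld(
    keys=["image", "label"],
    label_key="label",
    spatial_size=(32, 32, 32),
    num_samples=BATCH_SIZE,
)
loader = DataLoader(Dataset(data=data, transform=crop), batch_size=1)

for epoch in range(NUM_EPOCHS):
    for batch in loader:
        # Each iteration yields torch.Size([2, 1, 32, 32, 32]),
        # with fresh random crops every epoch.
        print(batch["image"].shape)
```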
Hi,
I have very large images (say, 3000x3000x1000) and I want to extract several patches from each image for training. I was doing this with `RandCropByPosNegLabeld` by setting its `num_samples` parameter. However, this parameter dominates the batch size: if I set `batch_size=2` in the `DataLoader` and extract 100 patches via `RandCropByPosNegLabeld`'s `num_samples=100`, the training loop gives me a tensor of size `[100, C, H, W, D]` rather than batches of 2. Below I put a minimal working example illustrating this issue.
Is there a way to extract many patches, keep them in memory after augmentation, and iterate over them in batches during training?
Thanks!
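A minimal sketch of the setup (assuming a single synthetic volume, scaled down to 64x64x64 so it runs quickly, in place of the real images):

```python
import numpy as np
from monai.data import Dataset, DataLoader
from monai.transforms import RandCropByPosNegLabeld

BATCH_SIZE = 2
NUM_SAMPLES = 10

# One small synthetic volume stands in for the real 3000x3000x1000 images.
data = [{
    "image": np.random.rand(1, 64, 64, 64).astype(np.float32),
    "label": (np.random.rand(1, 64, 64, 64) > 0.5).astype(np.float32),
}]

crop = RandCropByPosNegLabeld(
    keys=["image", "label"],
    label_key="label",
    spatial_size=(32, 32, 32),
    pos=1.0,
    neg=1.0,
    num_samples=NUM_SAMPLES,  # 10 random crops per loaded image
)

ds = Dataset(data=data, transform=crop)
loader = DataLoader(ds, batch_size=BATCH_SIZE)  # MONAI's collate flattens the crop list

for batch in loader:
    print(batch["image"].shape)
```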
With `BATCH_SIZE = 2`, `NUM_SAMPLES = 10`:

Output:
```
torch.Size([10, 1, 32, 32, 32])
```

Expected/desired output:
```
torch.Size([2, 1, 32, 32, 32])
torch.Size([2, 1, 32, 32, 32])
torch.Size([2, 1, 32, 32, 32])
torch.Size([2, 1, 32, 32, 32])
torch.Size([2, 1, 32, 32, 32])
```
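In other words, the behaviour I'm after is equivalent to manually splitting the collated crops into `BATCH_SIZE` chunks (a sketch, reusing the `loader` from the example above):

```python
for batch in loader:
    images, labels = batch["image"], batch["label"]  # [10, 1, 32, 32, 32]
    # Split the num_samples crops along dim 0 into mini-batches of BATCH_SIZE.
    for img, lbl in zip(images.split(BATCH_SIZE), labels.split(BATCH_SIZE)):
        print(img.shape)  # torch.Size([2, 1, 32, 32, 32]), five times
```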