Hi, I am trying to train Auto3DSeg on the VerSe dataset. The GPUs available to me have 8 GB of memory, and I believe Auto3DSeg needs more than that: my training simply stops after the first few epochs. Are there ways to reduce the GPU memory required? I have already downsampled and cropped my data, and that is enough for nnU-Net.
Another question concerns the customizability of certain parameters. When training a U-Net with MONAI and PyTorch Lightning, I had to reduce num_workers to 0 or 1. In Auto3DSeg I see that the train dataloader uses 8 workers. Can I reduce this parameter, and if so, how?
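For reference, this is the kind of override I was hoping to apply. I am assuming that `AutoRunner.set_training_params` accepts keys like `num_images_per_batch` and `num_sw_batch_size` (names I saw in the MONAI Auto3DSeg tutorials), and that `num_workers` can be overridden the same way; please correct me if those assumptions are wrong:

```python
# Sketch of the training-parameter overrides I would like to apply.
# The key names are assumptions based on the MONAI Auto3DSeg tutorials,
# not something I have verified against the algo templates.
train_params = {
    "num_images_per_batch": 1,  # smaller training batch, to fit in 8 GB
    "num_sw_batch_size": 1,     # smaller sliding-window inference batch
    "num_workers": 1,           # fewer dataloader workers (my actual question)
}

# Hypothetical usage, assuming an AutoRunner set up for my VerSe task
# ("./verse_task.yaml" is a placeholder input config):
# from monai.apps.auto3dseg import AutoRunner
# runner = AutoRunner(input="./verse_task.yaml")
# runner.set_training_params(train_params)
# runner.run()
```

Would passing a dict like this be the intended way to control memory use and the number of workers, or do I need to edit the generated algorithm configs directly?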