Hi! In the tutorials, such as spleen segmentation, the model's predictions are generated through different methods in the training and validation phases. During training, the model's predictions are obtained directly through the model's forward function. However, during evaluation, sliding window inference is used to generate the predictions. May I know why there is such a disparity? Could we use sliding window inference in both phases? As we are using two different ways to generate the model's output, wouldn't it affect the model's ability to perform? This is a concern since the output prediction space is different during training and validation. Thank you!
Replies: 1 comment
Hi @jxsoo1 , that's due to the different training and validation strategies.
Since radiology images such as CT scans are large (e.g., 512×512×hundreds of slices), a GPU can't take the entire high-resolution volume as input. Typically, when training the model, we use cropped patches (sub-volumes, e.g., 96×96×96) randomly sampled from a CT scan, and run the prediction and loss backward on these sub-volumes. But in validation or inference, we need to run prediction on all sub-volumes and ensemble them to get a prediction for the complete CT scan; to achieve this, we iterate over all patches in a sliding-window fashion.
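To make the validation side concrete, here is a minimal NumPy sketch of what sliding-window inference does: tile the volume into overlapping patches, predict each, and average the overlapping predictions back into a full-size output. This is only an illustration of the idea — MONAI's actual `sliding_window_inference` additionally supports Gaussian blending, batched windows, device management, etc.

```python
import numpy as np

def sliding_window_predict(volume, predictor, roi=(96, 96, 96), overlap=0.5):
    """Tile `volume` into overlapping `roi`-sized patches, run `predictor`
    on each patch, and average overlapping predictions into a full output.
    `predictor` maps a patch to a same-shaped score map."""
    step = [max(1, int(r * (1 - overlap))) for r in roi]
    out = np.zeros(volume.shape, dtype=np.float64)
    count = np.zeros(volume.shape, dtype=np.float64)
    # Window start positions per axis; always include the last valid start
    # so the final patch reaches the volume boundary.
    starts = [
        sorted(set(range(0, volume.shape[d] - roi[d] + 1, step[d]))
               | {volume.shape[d] - roi[d]})
        for d in range(3)
    ]
    for z in starts[0]:
        for y in starts[1]:
            for x in starts[2]:
                sl = (slice(z, z + roi[0]),
                      slice(y, y + roi[1]),
                      slice(x, x + roi[2]))
                out[sl] += predictor(volume[sl])
                count[sl] += 1  # track how many windows covered each voxel
    return out / count  # average the overlapping predictions

# Toy check: with an identity "model", the tiled-and-averaged output
# must reconstruct the input volume exactly.
vol = np.random.rand(32, 32, 32)
pred = sliding_window_predict(vol, lambda p: p, roi=(16, 16, 16))
```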
In training, you could use sliding windows to sample patches, but yes, it is typically not as good as random sampling on the foregrounds.
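For the training side, here is a minimal sketch of foreground-biased random cropping — the idea behind MONAI's `RandCropByPosNegLabeld` transform. The function name and defaults below are illustrative, not MONAI's API: with probability `pos_ratio` the patch is centered on a foreground voxel, otherwise it is sampled uniformly.

```python
import numpy as np

def random_foreground_crop(image, label, roi=(96, 96, 96),
                           pos_ratio=0.5, rng=None):
    """Sample one training patch from (image, label).
    With probability `pos_ratio`, center the patch on a random
    foreground voxel; otherwise pick a uniformly random center."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < pos_ratio and label.any():
        fg = np.argwhere(label > 0)           # all foreground coordinates
        center = fg[rng.integers(len(fg))]    # pick one at random
    else:
        center = [rng.integers(s) for s in label.shape]
    # Clamp the patch start so the crop stays inside the volume
    # (assumes every axis is at least `roi` long).
    sl = tuple(
        slice(max(0, min(c - r // 2, s - r)),
              max(0, min(c - r // 2, s - r)) + r)
        for c, r, s in zip(center, roi, label.shape)
    )
    return image[sl], label[sl]

# Toy check: force a positive sample; the crop must contain
# the single foreground voxel.
img = np.zeros((32, 32, 32))
lbl = np.zeros((32, 32, 32))
lbl[10, 10, 10] = 1
ip, lp = random_foreground_crop(img, lbl, roi=(16, 16, 16),
                                pos_ratio=1.0,
                                rng=np.random.default_rng(0))
```

Random foreground-biased crops like this keep each batch small enough for GPU memory while making sure the (often tiny) organ of interest appears in training patches frequently enough to learn from.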