By default, `dice_metric.aggregate()` computes an average over all buffered samples, so you can change the validation loop as follows:

```diff
  dice_metric(y_pred=val_outputs_convert, y=val_labels_convert)
- dice = dice_metric.aggregate().item()
- print(dice)
- dice_vals.append(dice)
- mean_dice_val = np.mean(dice_vals)
+ mean_dice_val = dice_metric.aggregate().item()
+ dice_metric.reset()  # clear the buffer
  print(mean_dice_val)
```
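Why this matters can be seen with a small stand-in class. This is a sketch only: `BufferedDiceMetric` is a hypothetical name that mimics the `aggregate()`/`reset()` buffer semantics described above, not MONAI's actual implementation.

```python
import numpy as np

class BufferedDiceMetric:
    """Toy stand-in mimicking the buffer semantics of a cumulative Dice metric
    (hypothetical class, not MONAI's real DiceMetric)."""

    def __init__(self):
        self.buffer = []  # one Dice score per sample since the last reset()

    def __call__(self, y_pred, y):
        # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks
        intersection = np.logical_and(y_pred, y).sum()
        self.buffer.append(2.0 * intersection / (y_pred.sum() + y.sum()))

    def aggregate(self):
        # averages over *all* samples buffered since the last reset()
        return float(np.mean(self.buffer))

    def reset(self):
        self.buffer = []

# Two validation samples with known Dice scores 1.0 and 2/3.
gt = np.array([1, 1, 0, 0])
preds = [np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0])]

# Old pattern: appending aggregate() inside the loop collects *running averages*.
metric = BufferedDiceMetric()
dice_vals = []
for p in preds:
    metric(y_pred=p, y=gt)
    dice_vals.append(metric.aggregate())  # 1.0, then (1.0 + 2/3) / 2
old_mean = float(np.mean(dice_vals))      # mean of running averages: biased

# Fixed pattern: aggregate once per epoch, then reset the buffer.
metric.reset()
for p in preds:
    metric(y_pred=p, y=gt)
new_mean = metric.aggregate()             # (1.0 + 2/3) / 2
metric.reset()
```

The old loop averages running averages, which over-weights early samples (here it yields 11/12 instead of the true mean 5/6); aggregating once per epoch and then resetting gives the correct per-epoch mean.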
Hi, I'm following the code in spleen_segmentation_3d.ipynb for my multi-label segmentation task (two targets plus background). When I checked the Dice metric of each sample in the validation set using the best model saved during training, the average Dice was inconsistent with the highest Dice recorded during validation. Strangely, the highest Dice saved during training/validation is the same as the Dice of the last sample in the validation set when I check the samples one by one. I guess there must be something wrong in my code; could you please help me? Thanks a lot.
The validation step inside the training loop:
Checking the Dice of each sample in the validation set: