
Commit 4aadbc4

Adding a 64GB limit for cross_entropy_large test

- https://ontrack-internal.amd.com/browse/SWDEV-373709 ROCm needs more memory and there is no GPU support above 64GB, so this change is kept on the ROCm fork only.

1 parent 2d9f3ec commit 4aadbc4
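For context on the memory budget (an estimate, not stated in the commit): the test allocates a logits tensor of shape `(2**16, 2**16 + 1)` in float32, which alone occupies roughly 16 GiB before the gradient buffer and loss-kernel workspace are counted, so the total footprint lands in the tens of gigabytes:

```python
# Back-of-the-envelope size of the test's logits tensor
# (shape and dtype taken from the diff below).
rows = 2 ** 16
cols = 2 ** 16 + 1
bytes_per_elem = 4  # float32
logits_bytes = rows * cols * bytes_per_elem
gib = logits_bytes / 2 ** 30
print(f"{logits_bytes} bytes ≈ {gib:.1f} GiB")
```

Doubling that for the `requires_grad=True` gradient buffer already approaches the original 45GB budget, which is consistent with ROCm needing the larger 64GB limit.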

File tree

1 file changed (+2 −1 lines)


test/test_nn.py (2 additions, 1 deletion)

@@ -11554,7 +11554,8 @@ def check_equal(loss, inp_targ_1, inp_targ_2):
     # Ref: https://github.com/pytorch/pytorch/issue/85005
     @onlyCUDA
     @largeTensorTest("45GB", "cpu")
-    @largeTensorTest("45GB", "cuda")
+    # https://ontrack-internal.amd.com/browse/SWDEV-373709, ROCm needs more memory and no GPU support with >64GB
+    @largeTensorTest("64GB" if TEST_WITH_ROCM else "45GB", "cuda")
     @parametrize_test("reduction", ("none", "mean", "sum"))
     def test_cross_entropy_large_tensor(self, device, reduction):
         logits = torch.randn(int(2 ** 16), int(2 ** 16) + 1, dtype=torch.float32, device='cuda', requires_grad=True)
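The pattern in the changed line is a decorator whose argument is chosen at import time from a platform flag (`TEST_WITH_ROCM`). Below is a minimal sketch of how a `largeTensorTest`-style, memory-gated decorator can work; the names `large_tensor_test`, `parse_size`, and `available_bytes_fn` are hypothetical stand-ins, not PyTorch's actual implementation, which queries the real CPU/CUDA allocators:

```python
import unittest

def parse_size(size):
    """Convert a '45GB'-style string to bytes (binary units); ints pass through."""
    if isinstance(size, int):
        return size
    units = {"KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
    return int(float(size[:-2]) * units[size[-2:].upper()])

def large_tensor_test(size, available_bytes_fn):
    """Skip the wrapped test when available_bytes_fn() reports less than `size`.

    `available_bytes_fn` is a stand-in for querying the device allocator.
    """
    required = parse_size(size)

    def decorator(fn):
        def wrapper(*args, **kwargs):
            if available_bytes_fn() < required:
                raise unittest.SkipTest(f"needs {required} bytes of memory")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

# The ROCm-specific change then reduces to a conditional decorator argument:
TEST_WITH_ROCM = True  # stand-in for PyTorch's real flag

@large_tensor_test("64GB" if TEST_WITH_ROCM else "45GB",
                   available_bytes_fn=lambda: 70 * 2**30)  # pretend 70 GiB free
def test_cross_entropy_large_tensor():
    return "ran"

print(test_cross_entropy_large_tensor())  # enough memory, so the test runs
```

Because the conditional is evaluated when the decorator is applied, the ROCm fork raises the requirement to 64GB without touching the test body, and machines below the threshold skip rather than fail.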
