
Commit c7eff80

Adding a 64GB limit for cross_entropy_large test
- https://ontrack-internal.amd.com/browse/SWDEV-373709: ROCm needs more memory for this test, and there is no GPU support for >64GB, so this change is kept on the ROCm fork only.
1 parent 6945aa0 commit c7eff80

File tree

1 file changed

+2
-1
lines changed


test/test_nn.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -11883,7 +11883,8 @@ def check_equal(loss, inp_targ_1, inp_targ_2):
     # Ref: https://github.com/pytorch/pytorch/issue/85005
     @onlyCUDA
     @largeTensorTest("45GB", "cpu")
-    @largeTensorTest("45GB", "cuda")
+    # https://ontrack-internal.amd.com/browse/SWDEV-373709, ROCm needs more memory and no GPU support with >64GB
+    @largeTensorTest("64GB" if TEST_WITH_ROCM else "45GB", "cuda")
     @parametrize_test("reduction", ("none", "mean", "sum"))
     def test_cross_entropy_large_tensor(self, device, reduction):
         logits = torch.randn(int(2 ** 16), int(2 ** 16) + 1, dtype=torch.float32, device='cuda', requires_grad=True)
```
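The key pattern in this diff is that the decorator argument is a conditional expression evaluated once at class-definition time, so the memory requirement is baked in depending on the build. A minimal sketch of that pattern, using a hypothetical stand-in for PyTorch's `@largeTensorTest` (the real decorator lives in `torch.testing._internal.common_device_type` and actually skips the test when the device lacks the requested memory):

```python
# Stand-in flag; in PyTorch this comes from torch.testing._internal.common_utils.
TEST_WITH_ROCM = False

def large_tensor_test(size, device):
    """Hypothetical stand-in for @largeTensorTest: records the memory
    requirement on the test function instead of skipping the test."""
    def decorator(fn):
        fn._required_memory = (size, device)
        return fn
    return decorator

# The conditional expression is evaluated when the decorator is applied,
# mirroring: @largeTensorTest("64GB" if TEST_WITH_ROCM else "45GB", "cuda")
@large_tensor_test("64GB" if TEST_WITH_ROCM else "45GB", "cuda")
def test_cross_entropy_large_tensor():
    pass

print(test_cross_entropy_large_tensor._required_memory)  # ('45GB', 'cuda')
```

Because the flag is read at definition time, a ROCm build declares the 64GB requirement and a CUDA build keeps the original 45GB, without any runtime branching inside the test body.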

0 commit comments