
Commit c845973

steffenlarsen authored and AlexeySachkov committed
[SYCL] Remove host run and dependencies from SYCL/Reduction tests (intel#1216)
This commit removes the host run and any assumptions and operations related to the host device from the tests in SYCL/Reduction.

Signed-off-by: Larsen, Steffen <[email protected]>
Co-authored-by: Sachkov, Alexey <[email protected]>
1 parent 1b517aa commit c845973

10 files changed: +7, -33 lines changed

SYCL/Reduction/reduction_big_data.cpp

Lines changed: 1 addition & 5 deletions
@@ -3,14 +3,10 @@
 // RUN: %ACC_RUN_PLACEHOLDER %t.out
 // RUN: %CPU_RUN_PLACEHOLDER %t.out
 //
-// `Group algorithms are not supported on host device` on Nvidia.
+// Group algorithms are not supported on Nvidia.
 // XFAIL: hip_nvidia
 //

-// RUNx: %HOST_RUN_PLACEHOLDER %t.out
-// TODO: Enable the test for HOST when it supports ext::oneapi::reduce() and
-// barrier()
-
 // This test performs basic checks of parallel_for(nd_range, reduction, func)
 // where the bigger data size and/or non-uniform work-group sizes may cause
 // errors.
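For reference, a minimal sketch of the parallel_for(nd_range, reduction, func) pattern these tests exercise; the queue setup, sizes, and variable names below are illustrative and not taken from the test itself:

#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue Q;
  int Sum = 0;
  {
    sycl::buffer<int> SumBuf(&Sum, 1);
    Q.submit([&](sycl::handler &CGH) {
      // Reduction over a buffer with the standard plus operation.
      auto Red = sycl::reduction(SumBuf, CGH, sycl::plus<int>());
      CGH.parallel_for(sycl::nd_range<1>{1024, 64}, Red,
                       [=](sycl::nd_item<1> It, auto &SumArg) {
                         SumArg += static_cast<int>(It.get_global_id(0));
                       });
    });
  } // Buffer destruction writes the result back into Sum.
  std::cout << "Sum = " << Sum << "\n"; // 0 + 1 + ... + 1023 = 523776
  return 0;
}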

SYCL/Reduction/reduction_nd_N_queue_shortcut.cpp

Lines changed: 1 addition & 5 deletions
@@ -3,13 +3,9 @@
 // RUN: %ACC_RUN_PLACEHOLDER %t.out
 // RUN: %CPU_RUN_PLACEHOLDER %t.out

-// `Group algorithms are not supported on host device.` on NVidia.
+// Group algorithms are not supported on NVidia.
 // XFAIL: hip_nvidia

-// RUNx: %HOST_RUN_PLACEHOLDER %t.out
-// TODO: Enable the test for HOST when it supports ext::oneapi::reduce() and
-// barrier()
-
 // This test only checks that the method queue::parallel_for() accepting
 // reduction, can be properly translated into queue::submit + parallel_for().
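The shortcut under test passes a reduction straight to queue::parallel_for, skipping the explicit queue::submit. A minimal sketch under SYCL 2020; the names and sizes are illustrative, not from the test:

#include <sycl/sycl.hpp>

int main() {
  sycl::queue Q;
  int *Sum = sycl::malloc_shared<int>(1, Q);
  *Sum = 0;
  // Shortcut form: the reduction goes directly to queue::parallel_for,
  // which should behave like queue::submit + handler::parallel_for.
  Q.parallel_for(sycl::nd_range<1>{256, 32},
                 sycl::reduction(Sum, sycl::plus<int>()),
                 [=](sycl::nd_item<1>, auto &SumArg) { SumArg += 1; })
      .wait();
  // *Sum == 256 after the wait.
  sycl::free(Sum, Q);
  return 0;
}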

SYCL/Reduction/reduction_nd_conditional.cpp

Lines changed: 0 additions & 1 deletion
@@ -1,5 +1,4 @@
 // RUN: %clangxx -fsycl -fsycl-targets=%sycl_triple %s -o %t.out
-// RUNx: %HOST_RUN_PLACEHOLDER %t.out
 // RUN: %CPU_RUN_PLACEHOLDER %t.out
 // RUN: %GPU_RUN_PLACEHOLDER %t.out
 // RUN: %ACC_RUN_PLACEHOLDER %t.out

SYCL/Reduction/reduction_nd_dw.cpp

Lines changed: 1 addition & 2 deletions
@@ -1,10 +1,9 @@
 // RUN: %clangxx -fsycl -fsycl-targets=%sycl_triple %s -o %t.out
-// RUNx: %HOST_RUN_PLACEHOLDER %t.out
 // RUN: %CPU_RUN_PLACEHOLDER %t.out
 // RUN: %GPU_RUN_PLACEHOLDER %t.out
 // RUN: %ACC_RUN_PLACEHOLDER %t.out
 //
-// `Group algorithms are not supported on host device.` on Nvidia.
+// Group algorithms are not supported on Nvidia.
 // XFAIL: hip_nvidia

 // This test performs basic checks of parallel_for(nd_range, reduction, func)

SYCL/Reduction/reduction_nd_ext_double.cpp

Lines changed: 0 additions & 3 deletions
@@ -9,9 +9,6 @@

 // XFAIL: hip_nvidia

-// TODO: Enable the test for HOST when it supports intel::reduce() and barrier()
-// RUNx: %HOST_RUN_PLACEHOLDER %t.out
-
 // This test performs basic checks of parallel_for(nd_range, reduction, func)
 // used with 'double' type.

SYCL/Reduction/reduction_nd_ext_half.cpp

Lines changed: 0 additions & 3 deletions
@@ -8,9 +8,6 @@
 // work group size not bigger than 1` on Nvidia.
 // XFAIL: hip_amd || hip_nvidia

-// TODO: Enable the test for HOST when it supports intel::reduce() and barrier()
-// RUNx: %HOST_RUN_PLACEHOLDER %t.out
-
 // This test performs basic checks of parallel_for(nd_range, reduction, func)
 // used with 'half' type.

SYCL/Reduction/reduction_nd_ext_type.hpp

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ template <typename T> int runTests(sycl::aspect ExtAspect) {
   queue Q;
   printDeviceInfo(Q);
   device D = Q.get_device();
-  if (!D.is_host() && !D.has(ExtAspect)) {
+  if (!D.has(ExtAspect)) {
     std::cout << "Test skipped\n";
     return 0;
   }
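With the host device gone, the aspect query alone decides whether a typed test can run. A minimal standalone sketch of the same gating pattern; runTests and printDeviceInfo are harness helpers, so the sketch inlines a plain fp64 check instead:

#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue Q;
  sycl::device D = Q.get_device();
  // No is_host() special case: the aspect check alone gates the test,
  // e.g. runTests<double> would pass sycl::aspect::fp64 here.
  if (!D.has(sycl::aspect::fp64)) {
    std::cout << "Test skipped\n";
    return 0;
  }
  std::cout << "Device supports fp64\n";
  return 0;
}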

SYCL/Reduction/reduction_nd_lambda.cpp

Lines changed: 1 addition & 3 deletions
@@ -1,11 +1,9 @@
 // RUN: %clangxx -fsycl -fsycl-targets=%sycl_triple %s -o %t.out
-// RUNx: %HOST_RUN_PLACEHOLDER %t.out
 // RUN: %CPU_RUN_PLACEHOLDER %t.out
 // RUN: %GPU_RUN_PLACEHOLDER %t.out
 // RUN: %ACC_RUN_PLACEHOLDER %t.out
 //
-// Inconsistently fails on HIP AMD, error message `Barrier is not supported on
-// the host device yet.` on HIP Nvidia.
+// Inconsistently fails on HIP AMD, HIP Nvidia.
 // UNSUPPORTED: hip_amd || hip_nvidia

 // This test performs basic checks of parallel_for(nd_range, reduction, lambda)
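For the lambda case, a SYCL 2020 reduction built from a user-provided combiner needs an explicit identity value, since the operation is not one of the known standard functors. A minimal sketch with illustrative values:

#include <sycl/sycl.hpp>

int main() {
  sycl::queue Q;
  int Prod = 1;
  {
    sycl::buffer<int> ProdBuf(&Prod, 1);
    Q.submit([&](sycl::handler &CGH) {
      // Lambda combiner with an explicit identity (1 for multiplication).
      auto Red = sycl::reduction(ProdBuf, CGH, 1,
                                 [](int A, int B) { return A * B; });
      CGH.parallel_for(sycl::nd_range<1>{8, 4}, Red,
                       [=](sycl::nd_item<1>, auto &P) { P.combine(2); });
    });
  }
  // Prod == 1 * 2^8 == 256 after buffer write-back.
  return 0;
}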

SYCL/Reduction/reduction_nd_queue_shortcut.cpp

Lines changed: 1 addition & 5 deletions
@@ -3,13 +3,9 @@
 // RUN: %ACC_RUN_PLACEHOLDER %t.out
 // RUN: %CPU_RUN_PLACEHOLDER %t.out

-// `Group algorithms are not supported on host device.` on NVidia.
+// Group algorithms are not supported on NVidia.
 // XFAIL: hip_nvidia

-// RUNx: %HOST_RUN_PLACEHOLDER %t.out
-// TODO: Enable the test for HOST when it supports ext::oneapi::reduce() and
-// barrier()
-
 // This test only checks that the method queue::parallel_for() accepting
 // reduction, can be properly translated into queue::submit + parallel_for().

SYCL/Reduction/reduction_range_queue_shortcut.cpp

Lines changed: 1 addition & 5 deletions
@@ -3,13 +3,9 @@
 // RUN: %ACC_RUN_PLACEHOLDER %t.out
 // RUN: %CPU_RUN_PLACEHOLDER %t.out

-// `Group algorithms are not supported on host device.` on NVidia.
+// Group algorithms are not supported on NVidia.
 // XFAIL: hip_nvidia

-// RUNx: %HOST_RUN_PLACEHOLDER %t.out
-// TODO: Enable the test for HOST when it supports ext::oneapi::reduce() and
-// barrier()
-
 // This test only checks that the shortcut method queue::parallel_for()
 // can accept 2 or more reduction variables.
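A minimal sketch of the multi-reduction shortcut under SYCL 2020, with two USM reduction variables; names and sizes are illustrative, not from the test:

#include <sycl/sycl.hpp>

int main() {
  sycl::queue Q;
  int *Sum = sycl::malloc_shared<int>(1, Q);
  int *Max = sycl::malloc_shared<int>(1, Q);
  *Sum = 0;
  *Max = 0;
  // Shortcut with two reduction variables: the kernel receives one
  // reducer argument per reduction, in the same order as declared.
  Q.parallel_for(sycl::range<1>{128},
                 sycl::reduction(Sum, sycl::plus<int>()),
                 sycl::reduction(Max, sycl::maximum<int>()),
                 [=](sycl::id<1> I, auto &S, auto &M) {
                   int V = static_cast<int>(I[0]);
                   S += V;
                   M.combine(V);
                 })
      .wait();
  // *Sum == 127*128/2 == 8128, *Max == 127.
  sycl::free(Sum, Q);
  sycl::free(Max, Q);
  return 0;
}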
