This repository was archived by the owner on Mar 28, 2023. It is now read-only.

[SYCL] Remove host run and dependencies from SYCL/Reduction tests #1216

Merged

6 changes: 1 addition & 5 deletions SYCL/Reduction/reduction_big_data.cpp
@@ -3,14 +3,10 @@
// RUN: %ACC_RUN_PLACEHOLDER %t.out
// RUN: %CPU_RUN_PLACEHOLDER %t.out
//
// `Group algorithms are not supported on host device` on Nvidia.
// Group algorithms are not supported on Nvidia.
// XFAIL: hip_nvidia
//

// RUNx: %HOST_RUN_PLACEHOLDER %t.out
// TODO: Enable the test for HOST when it supports ext::oneapi::reduce() and
// barrier()

// This test performs basic checks of parallel_for(nd_range, reduction, func)
// where the bigger data size and/or non-uniform work-group sizes may cause
// errors.
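
For context, a minimal sketch of the parallel_for(nd_range, reduction, func) pattern these tests exercise, assuming the SYCL 2020 sycl::reduction API; buffer names and sizes are illustrative, not taken from the test:

#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  sycl::queue Q;
  constexpr size_t N = 1024; // illustrative size; the actual test uses much larger data
  std::vector<int> Data(N, 1);
  int Sum = 0;
  {
    sycl::buffer<int> DataBuf(Data.data(), sycl::range<1>(N));
    sycl::buffer<int> SumBuf(&Sum, sycl::range<1>(1));
    Q.submit([&](sycl::handler &CGH) {
      sycl::accessor In(DataBuf, CGH, sycl::read_only);
      // Reduction object: accumulate into SumBuf with operator+.
      auto SumRed = sycl::reduction(SumBuf, CGH, sycl::plus<int>());
      CGH.parallel_for(sycl::nd_range<1>{sycl::range<1>{N}, sycl::range<1>{64}},
                       SumRed, [=](sycl::nd_item<1> It, auto &S) {
                         S += In[It.get_global_id(0)];
                       });
    });
  } // buffer destruction copies the result back into Sum
  std::cout << "Sum = " << Sum << "\n"; // expect 1024
  return 0;
}
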
6 changes: 1 addition & 5 deletions SYCL/Reduction/reduction_nd_N_queue_shortcut.cpp
@@ -3,13 +3,9 @@
// RUN: %ACC_RUN_PLACEHOLDER %t.out
// RUN: %CPU_RUN_PLACEHOLDER %t.out

// `Group algorithms are not supported on host device.` on NVidia.
// Group algorithms are not supported on NVidia.
// XFAIL: hip_nvidia

// RUNx: %HOST_RUN_PLACEHOLDER %t.out
// TODO: Enable the test for HOST when it supports ext::oneapi::reduce() and
// barrier()

// This test only checks that the method queue::parallel_for() accepting
// reduction, can be properly translated into queue::submit + parallel_for().

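For context, a rough sketch of the shortcut being checked, assuming USM and the SYCL 2020 queue::parallel_for overload that takes a reduction directly; pointer names are illustrative:

#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue Q;
  constexpr size_t N = 256;
  int *Data = sycl::malloc_shared<int>(N, Q);
  int *Sum = sycl::malloc_shared<int>(1, Q);
  for (size_t I = 0; I < N; ++I)
    Data[I] = 1;
  *Sum = 0;

  // Shortcut form: the reduction is passed straight to queue::parallel_for,
  // which is expected to behave like an explicit queue::submit whose command
  // group calls handler::parallel_for with the same arguments.
  Q.parallel_for(sycl::nd_range<1>{sycl::range<1>{N}, sycl::range<1>{64}},
                 sycl::reduction(Sum, sycl::plus<int>()),
                 [=](sycl::nd_item<1> It, auto &S) {
                   S += Data[It.get_global_id(0)];
                 })
      .wait();

  std::cout << "Sum = " << *Sum << "\n"; // expect 256
  sycl::free(Data, Q);
  sycl::free(Sum, Q);
  return 0;
}
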
1 change: 0 additions & 1 deletion SYCL/Reduction/reduction_nd_conditional.cpp
@@ -1,5 +1,4 @@
// RUN: %clangxx -fsycl -fsycl-targets=%sycl_triple %s -o %t.out
// RUNx: %HOST_RUN_PLACEHOLDER %t.out
// RUN: %CPU_RUN_PLACEHOLDER %t.out
// RUN: %GPU_RUN_PLACEHOLDER %t.out
// RUN: %ACC_RUN_PLACEHOLDER %t.out
3 changes: 1 addition & 2 deletions SYCL/Reduction/reduction_nd_dw.cpp
@@ -1,10 +1,9 @@
// RUN: %clangxx -fsycl -fsycl-targets=%sycl_triple %s -o %t.out
// RUNx: %HOST_RUN_PLACEHOLDER %t.out
// RUN: %CPU_RUN_PLACEHOLDER %t.out
// RUN: %GPU_RUN_PLACEHOLDER %t.out
// RUN: %ACC_RUN_PLACEHOLDER %t.out
//
// `Group algorithms are not supported on host device.` on Nvidia.
// Group algorithms are not supported on Nvidia.
// XFAIL: hip_nvidia

// This test performs basic checks of parallel_for(nd_range, reduction, func)
3 changes: 0 additions & 3 deletions SYCL/Reduction/reduction_nd_ext_double.cpp
@@ -9,9 +9,6 @@

// XFAIL: hip_nvidia

// TODO: Enable the test for HOST when it supports intel::reduce() and barrier()
// RUNx: %HOST_RUN_PLACEHOLDER %t.out

// This test performs basic checks of parallel_for(nd_range, reduction, func)
// used with 'double' type.

3 changes: 0 additions & 3 deletions SYCL/Reduction/reduction_nd_ext_half.cpp
@@ -8,9 +8,6 @@
// work group size not bigger than 1` on Nvidia.
// XFAIL: hip_amd || hip_nvidia

// TODO: Enable the test for HOST when it supports intel::reduce() and barrier()
// RUNx: %HOST_RUN_PLACEHOLDER %t.out

// This test performs basic checks of parallel_for(nd_range, reduction, func)
// used with 'half' type.

2 changes: 1 addition & 1 deletion SYCL/Reduction/reduction_nd_ext_type.hpp
@@ -20,7 +20,7 @@ template <typename T> int runTests(sycl::aspect ExtAspect) {
queue Q;
printDeviceInfo(Q);
device D = Q.get_device();
if (!D.is_host() && !D.has(ExtAspect)) {
if (!D.has(ExtAspect)) {
std::cout << "Test skipped\n";
return 0;
}
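
With the host device gone from the test, the guard above reduces to a plain aspect query. A minimal sketch of that pattern (the aspect::fp64 value and the skip message are illustrative, not the test's exact code):

#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue Q;
  sycl::device D = Q.get_device();
  // Skip when the selected device lacks the optional feature under test,
  // e.g. double-precision support for the fp64 variant of the test.
  if (!D.has(sycl::aspect::fp64)) {
    std::cout << "Test skipped\n";
    return 0;
  }
  // ... run the double-precision reduction checks here ...
  return 0;
}
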
4 changes: 1 addition & 3 deletions SYCL/Reduction/reduction_nd_lambda.cpp
@@ -1,11 +1,9 @@
// RUN: %clangxx -fsycl -fsycl-targets=%sycl_triple %s -o %t.out
// RUNx: %HOST_RUN_PLACEHOLDER %t.out
// RUN: %CPU_RUN_PLACEHOLDER %t.out
// RUN: %GPU_RUN_PLACEHOLDER %t.out
// RUN: %ACC_RUN_PLACEHOLDER %t.out
//
// Inconsistently fails on HIP AMD, error message `Barrier is not supported on
// the host device yet.` on HIP Nvidia.
// Inconsistently fails on HIP AMD, HIP Nvidia.
// UNSUPPORTED: hip_amd || hip_nvidia

// This test performs basic checks of parallel_for(nd_range, reduction, lambda)
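
For context, a small sketch of a reduction whose combiner is a lambda rather than a predefined functor, assuming the SYCL 2020 overload that takes an explicit identity; all names and values are illustrative:

#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue Q;
  constexpr size_t N = 128;
  int *Data = sycl::malloc_shared<int>(N, Q);
  int *Prod = sycl::malloc_shared<int>(1, Q);
  for (size_t I = 0; I < N; ++I)
    Data[I] = (I % 16) ? 1 : 2; // eight 2s, the rest 1s
  *Prod = 1;

  // The combiner is a plain lambda, so the identity element (1 for
  // multiplication) has to be supplied explicitly.
  Q.parallel_for(sycl::nd_range<1>{sycl::range<1>{N}, sycl::range<1>{32}},
                 sycl::reduction(Prod, 1, [](int A, int B) { return A * B; }),
                 [=](sycl::nd_item<1> It, auto &P) {
                   P.combine(Data[It.get_global_id(0)]);
                 })
      .wait();

  std::cout << "Product = " << *Prod << "\n"; // expect 256
  sycl::free(Data, Q);
  sycl::free(Prod, Q);
  return 0;
}
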
6 changes: 1 addition & 5 deletions SYCL/Reduction/reduction_nd_queue_shortcut.cpp
@@ -3,13 +3,9 @@
// RUN: %ACC_RUN_PLACEHOLDER %t.out
// RUN: %CPU_RUN_PLACEHOLDER %t.out

// `Group algorithms are not supported on host device.` on NVidia.
// Group algorithms are not supported on NVidia.
// XFAIL: hip_nvidia

// RUNx: %HOST_RUN_PLACEHOLDER %t.out
// TODO: Enable the test for HOST when it supports ext::oneapi::reduce() and
// barrier()

// This test only checks that the method queue::parallel_for() accepting
// reduction, can be properly translated into queue::submit + parallel_for().

6 changes: 1 addition & 5 deletions SYCL/Reduction/reduction_range_queue_shortcut.cpp
@@ -3,13 +3,9 @@
// RUN: %ACC_RUN_PLACEHOLDER %t.out
// RUN: %CPU_RUN_PLACEHOLDER %t.out

// `Group algorithms are not supported on host device.` on NVidia.
// Group algorithms are not supported on NVidia.
// XFAIL: hip_nvidia

// RUNx: %HOST_RUN_PLACEHOLDER %t.out
// TODO: Enable the test for HOST when it supports ext::oneapi::reduce() and
// barrier()

// This test only checks that the shortcut method queue::parallel_for()
// can accept 2 or more reduction variables.

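For context, a rough sketch of one queue::parallel_for call carrying two reduction variables, assuming USM and the SYCL 2020 range-based shortcut; names and operations are illustrative:

#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue Q;
  constexpr size_t N = 512;
  int *Data = sycl::malloc_shared<int>(N, Q);
  int *Sum = sycl::malloc_shared<int>(1, Q);
  int *Max = sycl::malloc_shared<int>(1, Q);
  for (size_t I = 0; I < N; ++I)
    Data[I] = static_cast<int>(I);
  *Sum = 0;
  *Max = 0;

  // Two reduction objects in one call; the kernel receives one reducer per
  // reduction, in the same order as they were passed.
  Q.parallel_for(sycl::range<1>{N},
                 sycl::reduction(Sum, sycl::plus<int>()),
                 sycl::reduction(Max, sycl::maximum<int>()),
                 [=](sycl::item<1> It, auto &S, auto &M) {
                   S += Data[It.get_linear_id()];
                   M.combine(Data[It.get_linear_id()]);
                 })
      .wait();

  std::cout << "Sum = " << *Sum << ", Max = " << *Max << "\n"; // 130816, 511
  sycl::free(Data, Q);
  sycl::free(Sum, Q);
  sycl::free(Max, Q);
  return 0;
}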