
Commit 4551830

Fznamznon, rdeodhar, JackAKirk, and steffenlarsen authored and committed
Cherry-pick test changes related to move of bfloat16 (intel#1459)
* [SYCL] Test corrections after moving bfloat16 support out of experimental status. (intel#1129)

  Test changes for intel/llvm#6524.

  Signed-off-by: Rajiv Deodhar <[email protected]>
  Co-authored-by: JackAKirk <[email protected]>

* [SYCL] Correct bfloat16 namespace in ESIMD and matrix tests (intel#1422)

  intel/llvm#6524 moved bfloat16 out of the experimental namespace. This commit removes the last remaining uses of the experimental namespace for bfloat16 in ESIMD and matrix tests.

  Signed-off-by: Larsen, Steffen <[email protected]>

Signed-off-by: Rajiv Deodhar <[email protected]>
Signed-off-by: Larsen, Steffen <[email protected]>
Co-authored-by: rdeodhar <[email protected]>
Co-authored-by: JackAKirk <[email protected]>
Co-authored-by: Steffen Larsen <[email protected]>
1 parent e4a3031 commit 4551830

7 files changed: +29 −26 lines

SYCL/BFloat16/bfloat16_builtins.cpp

Lines changed: 0 additions & 1 deletion
@@ -13,7 +13,6 @@
 
 using namespace sycl;
 using namespace sycl::ext::oneapi;
-using namespace sycl::ext::oneapi::experimental;
 
 constexpr int N = 60; // divisible by all tested array sizes
 constexpr float bf16_eps = 0.00390625;
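After this change the test relies only on `using namespace sycl::ext::oneapi;`, since bfloat16 now lives directly in that namespace (intel/llvm#6524). A minimal standalone sketch of the non-experimental spelling, separate from the test files in this commit:

// Standalone sketch (not part of this commit): bfloat16 through the
// non-experimental sycl::ext::oneapi namespace after intel/llvm#6524.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  // bfloat16 converts to and from float; 1.5f is exactly representable.
  sycl::ext::oneapi::bfloat16 B = 1.5f;
  float F = static_cast<float>(B);
  std::cout << "bfloat16 round-trip of 1.5f gives " << F << "\n";
  return F == 1.5f ? 0 : 1;
}

Any remaining sycl::ext::oneapi::experimental spelling of bfloat16 in the tests is exactly what this cherry-pick cleans up.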

SYCL/BFloat16/bfloat16_type.cpp

Lines changed: 6 additions & 25 deletions
@@ -1,13 +1,12 @@
-// RUN: %if cuda %{%clangxx -fsycl -fsycl-targets=%sycl_triple -DUSE_CUDA_SM80=1 -Xsycl-target-backend --cuda-gpu-arch=sm_80 %s -o %t.out %}
-// RUN: %if cuda %{%GPU_RUN_PLACEHOLDER %t.out %}
-// RUN: %clangxx -fsycl -fsycl-targets=%sycl_triple %s -o %t.out
+// UNSUPPORTED: hip
+// RUN: %if cuda %{%clangxx -fsycl -fsycl-targets=%sycl_triple -Xsycl-target-backend --cuda-gpu-arch=sm_80 %s -o %t.out %}
+// TODO enable the below when CI supports >=sm_80
+// RUNx: %if cuda %{%GPU_RUN_PLACEHOLDER %t.out %}
+// RUN: %clangxx -fsycl %s -o %t.out
 // TODO currently the feature isn't supported on FPGA.
 // RUN: %CPU_RUN_PLACEHOLDER %t.out
 // RUN: %GPU_RUN_PLACEHOLDER %t.out
 // RUNx: %ACC_RUN_PLACEHOLDER %t.out
-//
-// Not currently supported on HIP.
-// UNSUPPORTED: hip
 
 //==----------- bfloat16_type.cpp - SYCL bfloat16 type test ----------------==//
 //

@@ -19,22 +18,4 @@
 
 #include "bfloat16_type.hpp"
 
-int main() {
-
-#ifdef USE_CUDA_SM80
-  // Special build for SM80 CUDA.
-  sycl::device Dev{default_selector_v};
-  if (Dev.get_platform().get_backend() != backend::ext_oneapi_cuda) {
-    std::cout << "Test skipped; CUDA run was not run with CUDA device."
-              << std::endl;
-    return 0;
-  }
-  if (std::stof(Dev.get_info<sycl::info::device::backend_version>()) < 8.0f) {
-    std::cout << "Test skipped; CUDA device does not support SM80 or newer."
-              << std::endl;
-    return 0;
-  }
-#endif
-
-  return run_tests();
-}
+int main() { return run_tests(); }
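The SM80 runtime gating that used to live inside main() is gone: the sm_80 build is still compiled under %if cuda, but its execution moves to the dedicated bfloat16_type_cuda.cpp test added below. For orientation, here is a minimal sketch of the kind of round-trip check a run_tests() in bfloat16_type.hpp could perform; that header is not shown in this diff, so this body is an illustrative stand-in, not the actual test:

// Illustrative sketch only: bfloat16_type.hpp is not part of this diff,
// so this stands in for the sort of check run_tests() might perform.
#include <sycl/sycl.hpp>

int run_tests() {
  sycl::queue Q;
  float Out = 0.0f;
  {
    sycl::buffer<float, 1> Buf{&Out, sycl::range<1>{1}};
    Q.submit([&](sycl::handler &CGH) {
      sycl::accessor Acc{Buf, CGH, sycl::write_only};
      CGH.single_task([=]() {
        // Convert float -> bfloat16 -> float on the device.
        sycl::ext::oneapi::bfloat16 B = 2.0f;
        Acc[0] = static_cast<float>(B);
      });
    });
  } // Buffer destruction copies the result back to Out.
  return Out == 2.0f ? 0 : 1;
}

int main() { return run_tests(); }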

SYCL/BFloat16/bfloat16_type_cuda.cpp

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+// REQUIRES: gpu, cuda
+// RUN: %clangxx -fsycl -fsycl-targets=%sycl_triple -Xsycl-target-backend --cuda-gpu-arch=sm_80 %s -o %t.out
+// RUN: %t.out
+
+//==--------- bfloat16_type_cuda.cpp - SYCL bfloat16 type test -------------==//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "bfloat16_type.hpp"
+
+int main() { return run_tests(); }
Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+SYCL/BFloat16/bfloat16_type_cuda.cpp
Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<test name="bfloat16_bfloat16_type_cuda" driverID="llvm_test_suite_sycl">
+<description>WARNING: DON'T UPDATE THIS FILE MANUALLY!!!
+This config file auto-generated by suite_generator_sycl.pl.</description>
+</test>

llvm_test_suite_sycl.xml

Lines changed: 1 addition & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -259,6 +259,7 @@ Sources repo https://github.com/intel-innersource/applications.compilers.tests.l
259259
<test configFile="config_sycl/bfloat16_bfloat16_conversions.xml" splitGroup="bfloat16" testName="bfloat16_bfloat16_conversions" />
260260
<test configFile="config_sycl/bfloat16_bfloat16_example.xml" splitGroup="bfloat16" testName="bfloat16_bfloat16_example" />
261261
<test configFile="config_sycl/bfloat16_bfloat16_type.xml" splitGroup="bfloat16" testName="bfloat16_bfloat16_type" />
262+
<test configFile="config_sycl/bfloat16_bfloat16_type_cuda.xml" splitGroup="bfloat16" testName="bfloat16_bfloat16_type_cuda" />
262263
<test configFile="config_sycl/bfloat16_bfloat_hw.xml" splitGroup="bfloat16" testName="bfloat16_bfloat_hw" />
263264
<test configFile="config_sycl/complex_sycl_complex_math_test.xml" splitGroup="complex" testName="complex_sycl_complex_math_test" />
264265
<test configFile="config_sycl/complex_sycl_complex_operator_test.xml" splitGroup="complex" testName="complex_sycl_complex_operator_test" />

llvm_test_suite_sycl_valgrind.xml

Lines changed: 1 addition & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -259,6 +259,7 @@ Sources repo https://github.com/intel-innersource/applications.compilers.tests.l
 <test configFile="config_sycl/bfloat16_bfloat16_conversions.xml" splitGroup="bfloat16" testName="bfloat16_bfloat16_conversions" />
 <test configFile="config_sycl/bfloat16_bfloat16_example.xml" splitGroup="bfloat16" testName="bfloat16_bfloat16_example" />
 <test configFile="config_sycl/bfloat16_bfloat16_type.xml" splitGroup="bfloat16" testName="bfloat16_bfloat16_type" />
+<test configFile="config_sycl/bfloat16_bfloat16_type_cuda.xml" splitGroup="bfloat16" testName="bfloat16_bfloat16_type_cuda" />
 <test configFile="config_sycl/bfloat16_bfloat_hw.xml" splitGroup="bfloat16" testName="bfloat16_bfloat_hw" />
 <test configFile="config_sycl/complex_sycl_complex_math_test.xml" splitGroup="complex" testName="complex_sycl_complex_math_test" />
 <test configFile="config_sycl/complex_sycl_complex_operator_test.xml" splitGroup="complex" testName="complex_sycl_complex_operator_test" />
