Merge master into gold/2021 #1240

Merged 13 commits from master into gold/2021 on Nov 21, 2022.
4 changes: 2 additions & 2 deletions .github/workflows/build-sphinx.yml
@@ -58,7 +58,7 @@ jobs:

# https://github.com/marketplace/actions/setup-miniconda
- name: Setup miniconda
uses: conda-incubator/setup-miniconda@v2.1.1
uses: conda-incubator/setup-miniconda@v2.2.0
with:
auto-update-conda: true
python-version: ${{ env.python-ver }}
@@ -106,7 +106,7 @@ jobs:
if: |
!github.event.pull_request.head.repo.fork &&
(github.ref == 'refs/heads/master' || (startsWith(github.ref, 'refs/heads/release') == true) || github.event_name == 'push' && contains(github.ref, 'refs/tags/'))
uses: peaceiris/actions-gh-pages@v3.8.0
uses: peaceiris/actions-gh-pages@v3.9.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: doc/_build/html/
41 changes: 25 additions & 16 deletions .github/workflows/conda-package.yml
@@ -10,6 +10,15 @@ env:
PACKAGE_NAME: dpnp
MODULE_NAME: dpnp
CHANNELS: '-c dppy/label/dev -c intel -c defaults --override-channels'
TEST_SCOPE: >-
test_arraycreation.py
test_dparray.py
test_fft.py
test_linalg.py
test_mathematical.py
test_random_state.py
test_special.py
test_usm_type.py
VER_JSON_NAME: 'version.json'
VER_SCRIPT1: "import json; f = open('version.json', 'r'); j = json.load(f); f.close(); "
VER_SCRIPT2: "d = j['dpnp'][0]; print('='.join((d[s] for s in ('version', 'build'))))"
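Note on the new `TEST_SCOPE` variable above: YAML's `>-` folded block scalar joins the listed file names with single spaces, so `${{ env.TEST_SCOPE }}` expands to one space-separated argument list that pytest accepts as-is. A minimal sketch, assuming PyYAML is available (file list abbreviated):

```python
# Demonstrates how a `>-` folded scalar collapses to a single line.
import yaml

snippet = """
TEST_SCOPE: >-
  test_arraycreation.py
  test_dparray.py
  test_usm_type.py
"""
print(yaml.safe_load(snippet)["TEST_SCOPE"])
# -> test_arraycreation.py test_dparray.py test_usm_type.py
```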
@@ -41,7 +50,7 @@ jobs:
fetch-depth: 0

- name: Setup miniconda
uses: conda-incubator/setup-miniconda@v2.1.1
uses: conda-incubator/setup-miniconda@v2.2.0
with:
auto-update-conda: true
python-version: ${{ matrix.python }}
@@ -71,7 +80,7 @@ jobs:
run: conda build --no-test --python ${{ matrix.python }} ${{ env.CHANNELS }} conda-recipe

- name: Upload artifact
uses: actions/upload-artifact@….0
uses: actions/upload-artifact@….1
with:
name: ${{ env.PACKAGE_NAME }} ${{ runner.os }} Python ${{ matrix.python }}
path: ${{ env.CONDA_BLD }}${{ env.PACKAGE_NAME }}-*.tar.bz2
@@ -102,7 +111,7 @@ jobs:
fetch-depth: 0

- name: Setup miniconda
uses: conda-incubator/setup-miniconda@v2.1.1
uses: conda-incubator/setup-miniconda@v2.2.0
with:
auto-update-conda: true
python-version: ${{ matrix.python }}
@@ -134,7 +143,7 @@ jobs:
run: conda build --no-test --python ${{ matrix.python }} ${{ env.CHANNELS }} conda-recipe

- name: Upload artifact
uses: actions/upload-artifact@….0
uses: actions/upload-artifact@….1
with:
name: ${{ env.PACKAGE_NAME }} ${{ runner.os }} Python ${{ matrix.python }}
path: ${{ env.CONDA_BLD }}${{ env.PACKAGE_NAME }}-*.tar.bz2
@@ -165,7 +174,7 @@ jobs:

steps:
- name: Download artifact
uses: actions/download-artifact@….0
uses: actions/download-artifact@….1
with:
name: ${{ env.PACKAGE_NAME }} ${{ runner.os }} Python ${{ matrix.python }}
path: ${{ env.pkg-path-in-channel }}
@@ -176,7 +185,7 @@
tar -xvf ${{ env.pkg-path-in-channel }}/${{ env.PACKAGE_NAME }}-*.tar.bz2 -C ${{ env.extracted-pkg-path }}

- name: Setup miniconda
uses: conda-incubator/setup-miniconda@v2.1.1
uses: conda-incubator/setup-miniconda@v2.2.0
with:
auto-update-conda: true
python-version: ${{ matrix.python }}
@@ -235,7 +244,7 @@ jobs:
# TODO: run the whole scope once the issues on CPU are resolved
- name: Run tests
run: |
python -m pytest -q -ra --disable-warnings -vv test_arraycreation.py test_dparray.py test_fft.py test_linalg.py test_mathematical.py test_random_state.py test_special.py
python -m pytest -q -ra --disable-warnings -vv ${{ env.TEST_SCOPE }}
env:
OCL_ICD_FILENAMES: 'libintelocl.so'
working-directory: ${{ env.tests-path }}
@@ -267,7 +276,7 @@ jobs:

steps:
- name: Download artifact
uses: actions/download-artifact@….0
uses: actions/download-artifact@….1
with:
name: ${{ env.PACKAGE_NAME }} ${{ runner.os }} Python ${{ matrix.python }}
path: ${{ env.pkg-path-in-channel }}
@@ -287,7 +296,7 @@
dir ${{ env.extracted-pkg-path }}

- name: Setup miniconda
uses: conda-incubator/setup-miniconda@v2.1.1
uses: conda-incubator/setup-miniconda@v2.2.0
with:
auto-update-conda: true
python-version: ${{ matrix.python }}
@@ -410,14 +419,14 @@ jobs:
# TODO: run the whole scope once the issues on CPU are resolved
- name: Run tests
run: |
python -m pytest -q -ra --disable-warnings -vv test_arraycreation.py test_dparray.py test_fft.py test_linalg.py test_mathematical.py test_random_state.py test_special.py
python -m pytest -q -ra --disable-warnings -vv ${{ env.TEST_SCOPE }}
working-directory: ${{ env.tests-path }}

upload_linux:
needs: test_linux

if: |
!github.event.pull_request.head.repo.fork &&
(github.repository == 'IntelPython/dpnp') &&
(github.ref == 'refs/heads/master' || (startsWith(github.ref, 'refs/heads/release') == true) || github.event_name == 'push' && contains(github.ref, 'refs/tags/'))

runs-on: ubuntu-latest
@@ -432,12 +441,12 @@

steps:
- name: Download artifact
uses: actions/download-artifact@….0
uses: actions/download-artifact@….1
with:
name: ${{ env.PACKAGE_NAME }} ${{ runner.os }} Python ${{ matrix.python }}

- name: Setup miniconda
uses: conda-incubator/setup-miniconda@v2.1.1
uses: conda-incubator/setup-miniconda@v2.2.0
with:
auto-update-conda: true
python-version: ${{ matrix.python }}
@@ -456,7 +465,7 @@
needs: test_windows

if: |
!github.event.pull_request.head.repo.fork && !github.event.push.repository.fork &&
(github.repository == 'IntelPython/dpnp') &&
(github.ref == 'refs/heads/master' || (startsWith(github.ref, 'refs/heads/release') == true) || github.event_name == 'push' && contains(github.ref, 'refs/tags/'))

runs-on: windows-latest
@@ -470,12 +479,12 @@
python: ['3.8', '3.9']
steps:
- name: Download artifact
uses: actions/download-artifact@….0
uses: actions/download-artifact@….1
with:
name: ${{ env.PACKAGE_NAME }} ${{ runner.os }} Python ${{ matrix.python }}

- name: Setup miniconda
uses: conda-incubator/setup-miniconda@v2.1.1
uses: conda-incubator/setup-miniconda@v2.2.0
with:
auto-update-conda: true
python-version: ${{ matrix.python }}
52 changes: 36 additions & 16 deletions dpnp/backend/kernels/dpnp_krnl_random.cpp
@@ -973,33 +973,43 @@ DPCTLSyclEventRef dpnp_rng_multinomial_c(DPCTLSyclQueueRef q_ref,
else
{
DPNPC_ptr_adapter<double> p_ptr(q_ref, p_in, p_size, true);
const double* p = p_ptr.get_ptr();
std::vector<double> p_vec(p, p + p_size);
double* p_data = p_ptr.get_ptr();

// size = size
// `result` is a array for random numbers
// `size` is a `result`'s len. `size = n * p.size()`
// `size` is a `result`'s len. `size = n * p_size`
// `n` is a number of random values to be generated.
size_t n = size / p_vec.size();
size_t n = size / p_size;

size_t is_cpu_queue = dpnp_queue_is_cpu_c();

// math library supports the distribution generation on GPU device with input parameters
// which follow the condition
if (is_cpu_queue || (!is_cpu_queue && (p_size >= ((size_t)ntrial * 16)) && (ntrial <= 16)))
{
DPNPC_ptr_adapter<std::int32_t> result_ptr(q_ref, result, size, false, true);
std::int32_t* result1 = result_ptr.get_ptr();
mkl_rng::multinomial<std::int32_t> distribution(ntrial, p_vec);
DPNPC_ptr_adapter<_DataType> result_ptr(q_ref, result, size, true, true);
_DataType* result1 = result_ptr.get_ptr();

#if (INTEL_MKL_VERSION < __INTEL_MKL_2023_SWITCHOVER)
std::vector<double> p(p_data, p_data + p_size);
#else
auto p = sycl::span<double>{p_data, p_size};
#endif
mkl_rng::multinomial<_DataType> distribution(ntrial, p);

// perform generation
event_out = mkl_rng::generate(distribution, DPNP_RNG_ENGINE, n, result1);
event_ref = reinterpret_cast<DPCTLSyclEventRef>(&event_out);

p_ptr.depends_on(event_out);
result_ptr.depends_on(event_out);
}
else
{
DPNPC_ptr_adapter<std::int32_t> result_ptr(q_ref, result, size, true, true);
std::int32_t* result1 = result_ptr.get_ptr();
DPNPC_ptr_adapter<_DataType> result_ptr(q_ref, result, size, true, true);
_DataType* result1 = result_ptr.get_ptr();
int errcode = viRngMultinomial(
VSL_RNG_METHOD_MULTINOMIAL_MULTPOISSON, get_rng_stream(), n, result1, ntrial, p_size, p);
VSL_RNG_METHOD_MULTINOMIAL_MULTPOISSON, get_rng_stream(), n, result1, ntrial, p_size, p_data);
if (errcode != VSL_STATUS_OK)
{
throw std::runtime_error("DPNP RNG Error: dpnp_rng_multinomial_c() failed.");
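For context, a hedged sketch of the Python-level call this kernel backs, assuming `dpnp.random.multinomial` mirrors NumPy's signature: `ntrial` is `n`, `p` holds `pvals`, and the flat `size` handed to the kernel is `n_draws * len(pvals)`, matching the comment above.

```python
import dpnp

# 10 draws of 20 trials over 6 equally likely outcomes.
counts = dpnp.random.multinomial(20, [1 / 6.0] * 6, size=10)
print(counts.shape)   # (10, 6)
print(counts.sum())   # 200: each of the 10 rows sums to ntrial = 20
```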
@@ -1023,6 +1033,7 @@ void dpnp_rng_multinomial_c(
size,
dep_event_vec_ref);
DPCTLEvent_WaitAndThrow(event_ref);
DPCTLEvent_Delete(event_ref);
}

template <typename _DataType>
@@ -1065,24 +1076,32 @@ DPCTLSyclEventRef dpnp_rng_multivariate_normal_c(DPCTLSyclQueueRef q_ref,
sycl::queue q = *(reinterpret_cast<sycl::queue*>(q_ref));

DPNPC_ptr_adapter<double> mean_ptr(q_ref, mean_in, mean_size, true);
const double* mean = mean_ptr.get_ptr();
double* mean_data = mean_ptr.get_ptr();
DPNPC_ptr_adapter<double> cov_ptr(q_ref, cov_in, cov_size, true);
const double* cov = cov_ptr.get_ptr();
double* cov_data = cov_ptr.get_ptr();

_DataType* result1 = reinterpret_cast<_DataType*>(result);
_DataType* result1 = static_cast<_DataType *>(result);

std::vector<double> mean_vec(mean, mean + mean_size);
std::vector<double> cov_vec(cov, cov + cov_size);
#if (INTEL_MKL_VERSION < __INTEL_MKL_2023_SWITCHOVER)
std::vector<double> mean(mean_data, mean_data + mean_size);
std::vector<double> cov(cov_data, cov_data + cov_size);
#else
auto mean = sycl::span<double>{mean_data, mean_size};
auto cov = sycl::span<double>{cov_data, cov_size};
#endif

// `result` is a array for random numbers
// `size` is a `result`'s len.
// `size1` is a number of random values to be generated for each dimension.
size_t size1 = size / dimen;

mkl_rng::gaussian_mv<_DataType> distribution(dimen, mean_vec, cov_vec);
mkl_rng::gaussian_mv<_DataType> distribution(dimen, mean, cov);
auto event_out = mkl_rng::generate(distribution, DPNP_RNG_ENGINE, size1, result1);
event_ref = reinterpret_cast<DPCTLSyclEventRef>(&event_out);

mean_ptr.depends_on(event_out);
cov_ptr.depends_on(event_out);

return DPCTLEvent_Copy(event_ref);
}
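Similarly, a hedged usage sketch for the multivariate-normal kernel, assuming `dpnp.random.multivariate_normal` mirrors NumPy: `dimen` is `len(mean)` and `size1 = size / dimen` is the number of draws.

```python
import dpnp

mean = [0.0, 0.0]
cov = [[1.0, 0.0], [0.0, 1.0]]  # identity covariance, dimen = 2
samples = dpnp.random.multivariate_normal(mean, cov, size=1000)
print(samples.shape)  # (1000, 2)
```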

Expand All @@ -1107,6 +1126,7 @@ void dpnp_rng_multivariate_normal_c(void* result,
size,
dep_event_vec_ref);
DPCTLEvent_WaitAndThrow(event_ref);
DPCTLEvent_Delete(event_ref);
}

template <typename _DataType>
7 changes: 7 additions & 0 deletions dpnp/backend/src/dpnp_utils.hpp
@@ -49,6 +49,13 @@
#define __SYCL_COMPILER_2023_SWITCHOVER 20221102L
#endif

/**
* Version of Intel MKL at which transition to OneMKL release 2023.0.0 occurs.
*/
#ifndef __INTEL_MKL_2023_SWITCHOVER
#define __INTEL_MKL_2023_SWITCHOVER 20230000
#endif

/**
* @defgroup BACKEND_UTILS Backend C++ library utilities
* @{
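For orientation: the new `__INTEL_MKL_2023_SWITCHOVER` macro keys on oneMKL's integer version encoding, where release 2023.0.0 compares as 20230000; the kernel changes in dpnp_krnl_random.cpp use it to pick between the pre-2023 `std::vector` parameters and the 2023+ `sycl::span` ones. A toy sketch of the comparison — the year/minor/update field split here is an assumption inferred from the 2023.0.0 → 20230000 mapping:

```python
INTEL_MKL_2023_SWITCHOVER = 20230000

ver = 20230000  # stand-in for INTEL_MKL_VERSION at build time
year, minor, update = ver // 10000, (ver // 100) % 100, ver % 100
print(year, minor, update)               # 2023 0 0
print(ver >= INTEL_MKL_2023_SWITCHOVER)  # True -> use the sycl::span API
```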
17 changes: 1 addition & 16 deletions dpnp/dpnp_utils/dpnp_algo_utils.pyx
@@ -532,22 +532,7 @@ cdef tuple get_common_usm_allocation(dpnp_descriptor x1, dpnp_descriptor x2):
array1_obj = x1.get_array()
array2_obj = x2.get_array()

def get_usm_type(usm_types):
if not isinstance(usm_types, (list, tuple)):
raise TypeError(
"Expected a list or a tuple, got {}".format(type(usm_types))
)
if len(usm_types) == 0:
return None
elif len(usm_types) == 1:
return usm_types[0]
for usm_type1, usm_type2 in zip(usm_types, usm_types[1:]):
if usm_type1 != usm_type2:
return None
return usm_types[0]

# TODO: use similar function from dpctl.utils instead of get_usm_type
common_usm_type = get_usm_type((array1_obj.usm_type, array2_obj.usm_type))
common_usm_type = dpctl.utils.get_coerced_usm_type((array1_obj.usm_type, array2_obj.usm_type))
if common_usm_type is None:
raise ValueError(
"could not recognize common USM type for inputs of USM types {} and {}"
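The hand-rolled `get_usm_type` helper above is replaced by `dpctl.utils.get_coerced_usm_type`. A minimal sketch of the coercion rule the new call relies on — device wins over shared, shared over host; the first is confirmed by the new tests in this PR, the second is dpctl's documented ordering:

```python
import dpctl.utils as du

print(du.get_coerced_usm_type(["device", "host"]))  # device
print(du.get_coerced_usm_type(["shared", "host"]))  # shared
print(du.get_coerced_usm_type(["host", "host"]))    # host
```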
3 changes: 3 additions & 0 deletions tests/skipped_tests.tbl
@@ -199,9 +199,12 @@ tests/test_linalg.py::test_svd[(5,3)-float64]
tests/test_linalg.py::test_svd[(16,16)-float64]

tests/test_mathematical.py::TestGradient::test_gradient_y1_dx[3.5-array1]

tests/test_random.py::TestPermutationsTestShuffle::test_shuffle1[lambda x: dpnp.asarray([[i, i] for i in x])]
tests/test_random.py::TestPermutationsTestShuffle::test_shuffle1[lambda x: (dpnp.asarray([(i, i) for i in x], [("a", int), ("b", int)]).view(dpnp.recarray))]
tests/test_random.py::TestPermutationsTestShuffle::test_shuffle1[lambda x: dpnp.asarray([(i, i) for i in x], [("a", object), ("b", dpnp.int32)])]
tests/test_random.py::TestPermutationsTestShuffle::test_shuffle1[lambda x: dpnp.asarray(x).astype(dpnp.int8)]

tests/test_sort.py::test_partition[[[1, 0], [3, 0]]-float32-1]
tests/third_party/cupy/binary_tests/test_elementwise.py::TestElementwise::test_bitwise_and
tests/third_party/cupy/binary_tests/test_elementwise.py::TestElementwise::test_bitwise_or
6 changes: 1 addition & 5 deletions tests/skipped_tests_gpu.tbl
@@ -1,5 +1,3 @@
tests/test_random.py::TestDistributionsMultinomial::test_extreme_value
tests/test_random.py::TestDistributionsMultinomial::test_seed1
tests/test_random.py::TestDistributionsMultivariateNormal::test_moments
tests/test_random.py::TestDistributionsMultivariateNormal::test_output_shape_check
tests/test_random.py::TestDistributionsMultivariateNormal::test_seed
@@ -219,9 +217,7 @@ tests/test_mathematical.py::TestGradient::test_gradient_y1[array2]
tests/test_mathematical.py::TestGradient::test_gradient_y1_dx[2-array0]
tests/test_mathematical.py::TestGradient::test_gradient_y1_dx[2-array1]
tests/test_mathematical.py::TestGradient::test_gradient_y1_dx[2-array2]
tests/test_random.py::TestDistributionsMultinomial::test_check_sum
tests/test_random.py::TestDistributionsMultinomial::test_moments
tests/test_random.py::TestDistributionsMultinomial::test_seed

tests/test_random.py::TestPermutationsTestShuffle::test_shuffle1[lambda x: dpnp.astype(dpnp.asarray(x), dpnp.int8)]
tests/test_random.py::TestPermutationsTestShuffle::test_shuffle1[lambda x: dpnp.astype(dpnp.asarray(x), object)]
tests/test_random.py::TestPermutationsTestShuffle::test_shuffle1[lambda x: dpnp.vstack([x, x]).T]
36 changes: 36 additions & 0 deletions tests/test_usm_type.py
@@ -0,0 +1,36 @@
import pytest

import dpnp as dp

import dpctl.utils as du

list_of_usm_types = [
"device",
"shared",
"host"
]


@pytest.mark.parametrize("usm_type", list_of_usm_types, ids=list_of_usm_types)
def test_coerced_usm_types_sum(usm_type):
x = dp.arange(10, usm_type = "device")
y = dp.arange(10, usm_type = usm_type)

z = x + y

assert z.usm_type == x.usm_type
assert z.usm_type == "device"
assert y.usm_type == usm_type


@pytest.mark.parametrize("usm_type_x", list_of_usm_types, ids=list_of_usm_types)
@pytest.mark.parametrize("usm_type_y", list_of_usm_types, ids=list_of_usm_types)
def test_coerced_usm_types_mul(usm_type_x, usm_type_y):
x = dp.arange(10, usm_type = usm_type_x)
y = dp.arange(10, usm_type = usm_type_y)

z = x * y

assert x.usm_type == usm_type_x
assert y.usm_type == usm_type_y
assert z.usm_type == du.get_coerced_usm_type([usm_type_x, usm_type_y])
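This new module is also wired into the `TEST_SCOPE` list above. To run it in isolation, something like the following should work — a sketch using pytest's Python API, equivalent to `python -m pytest -ra tests/test_usm_type.py` from the repository root:

```python
import sys

import pytest

# Run only the new USM-type tests, with short summaries for all outcomes.
sys.exit(pytest.main(["-ra", "tests/test_usm_type.py"]))
```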