[SYCL] Support lambda functions passed to reduction #2190
Merged: bader merged 5 commits into intel:sycl from v-klochkov:public_vklochkov_reduction_lambda on Jul 29, 2020.
Changes from 1 commit (5 commits total):

- 6e0b92e [SYCL] Support lambda functions passed to reduction (v-klochkov)
- 068f6a9 Additional fix in reduction LIT test to be in sync with patch enablin… (v-klochkov)
- 22d167f Address reviewer's comment: add more details to LIT test comment section (v-klochkov)
- 4a271aa Fix clang-format issues in LIT test (v-klochkov)
- abe69a9 One more fix for clang-format (v-klochkov)
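For orientation, the pattern this patch enables is passing a generic lambda as the reduction's binary operation, where previously a named function object was expected. A minimal self-contained sketch, assuming the 2020-era cl::sycl::intel::reduction API used by the test in this diff (the kernel name SumIds, buffer names, and sizes are illustrative, not from the PR):

```cpp
#include <CL/sycl.hpp>
#include <iostream>
using namespace cl::sycl;

int main() {
  queue Q;
  buffer<int, 1> OutBuf(1);
  Q.submit([&](handler &CGH) {
    auto Out = OutBuf.get_access<access::mode::discard_write>(CGH);
    // What this patch enables: the binary operation passed to
    // intel::reduction is a generic lambda rather than a named
    // functor such as intel::plus<int>.
    auto Redu = intel::reduction(Out, 0, [](auto x, auto y) { return x + y; });
    CGH.parallel_for<class SumIds>(
        nd_range<1>{range<1>{32}, range<1>{8}}, Redu,
        [=](nd_item<1> It, auto &Sum) {
          Sum.combine(static_cast<int>(It.get_global_linear_id()));
        });
  });
  auto Host = OutBuf.get_access<access::mode::read>();
  std::cout << "Sum of global ids: " << Host[0] << "\n"; // expect 0+1+...+31
  return 0;
}
```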
The diff adds a single new file, a 71-line reduction LIT test:

```cpp
// UNSUPPORTED: cuda
// OpenCL C 2.x-like work-group functions are not yet supported by CUDA.
//
// RUN: %clangxx -fsycl -fsycl-targets=%sycl_triple %s -o %t.out
// RUNx: env SYCL_DEVICE_TYPE=HOST %t.out
// RUN: %CPU_RUN_PLACEHOLDER %t.out
// RUN: %GPU_RUN_PLACEHOLDER %t.out
// RUN: %ACC_RUN_PLACEHOLDER %t.out

// This test performs basic checks of parallel_for(nd_range, reduction, lambda).

#include "reduction_utils.hpp"
#include <CL/sycl.hpp>
#include <cassert>
#include <iostream>

using namespace cl::sycl;

template <class KernelName, typename T, class BinaryOperation>
void test(T Identity, BinaryOperation BOp, size_t WGSize, size_t NWItems) {
  buffer<T, 1> InBuf(NWItems);
  buffer<T, 1> OutBuf(1);

  // Initialize the input and compute the expected result on the host.
  T CorrectOut;
  initInputData(InBuf, CorrectOut, Identity, BOp, NWItems);

  // Compute: BOp, which may be a lambda, is forwarded to the reduction.
  queue Q;
  Q.submit([&](handler &CGH) {
    auto In = InBuf.template get_access<access::mode::read>(CGH);
    auto Out = OutBuf.template get_access<access::mode::discard_write>(CGH);
    auto Redu = intel::reduction(Out, Identity, BOp);

    range<1> GlobalRange(NWItems);
    range<1> LocalRange(WGSize);
    nd_range<1> NDRange(GlobalRange, LocalRange);
    CGH.parallel_for<KernelName>(NDRange, Redu,
                                 [=](nd_item<1> NDIt, auto &Sum) {
                                   Sum.combine(In[NDIt.get_global_linear_id()]);
                                 });
  });

  // Check correctness against the host-computed reference value.
  auto Out = OutBuf.template get_access<access::mode::read>();
  T ComputedOut = *(Out.get_pointer());
  if (ComputedOut != CorrectOut) {
    std::cout << "NWItems = " << NWItems << ", WGSize = " << WGSize << "\n";
    std::cout << "Computed value: " << ComputedOut
              << ", Expected value: " << CorrectOut << "\n";
    assert(0 && "Wrong value.");
  }
}

int main() {
  test<class AddTestName, int>(
      0, [](auto x, auto y) { return (x + y); }, 8, 32);
  // The identity element of multiplication is 1, not 0.
  test<class MulTestName, int>(
      1, [](auto x, auto y) { return (x * y); }, 8, 32);

  // Check with a CUSTOM element type.
  test<class CustomAddTestname, CustomVec<long long>>(
      CustomVec<long long>(0),
      [](auto x, auto y) {
        CustomVecPlus<long long> BOp;
        return BOp(x, y);
      },
      4, 64);

  std::cout << "Test passed\n";
  return 0;
}
```
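The helpers initInputData, CustomVec, and CustomVecPlus come from reduction_utils.hpp, which is not part of this diff. For readers without the repo handy, here is a hypothetical reconstruction of just the pieces the test uses; the names and call shapes are inferred from the test above, while the bodies are assumptions and will differ from the real header:

```cpp
// Hypothetical sketch of reduction_utils.hpp; interfaces inferred from the
// test above, implementations guessed.
#include <CL/sycl.hpp>
#include <iostream>

template <typename T> struct CustomVec {
  T X, Y;
  CustomVec() : X(0), Y(0) {}
  CustomVec(T V) : X(V), Y(V) {} // used by the test as CustomVec<long long>(0)
  bool operator!=(const CustomVec &V) const { return V.X != X || V.Y != Y; }
};

// The test streams the values on mismatch, so an operator<< must exist.
template <typename T>
std::ostream &operator<<(std::ostream &OS, const CustomVec<T> &V) {
  return OS << "(" << V.X << ", " << V.Y << ")";
}

// Functor wrapped by the custom-type reduction lambda in the test.
template <typename T> struct CustomVecPlus {
  CustomVec<T> operator()(const CustomVec<T> &A, const CustomVec<T> &B) const {
    CustomVec<T> R;
    R.X = A.X + B.X; // component-wise addition (guess)
    R.Y = A.Y + B.Y;
    return R;
  }
};

// Fills InBuf with deterministic values and folds them on the host with BOp,
// starting from Identity, so the caller gets a reference result to compare to.
template <typename T, class BinaryOperation>
void initInputData(cl::sycl::buffer<T, 1> &InBuf, T &ExpectedOut, T Identity,
                   BinaryOperation BOp, size_t N) {
  ExpectedOut = Identity;
  auto In = InBuf.template get_access<cl::sycl::access::mode::write>();
  for (size_t I = 0; I < N; ++I) {
    In[I] = T(I % 7 + 1); // arbitrary deterministic values; real header may differ
    ExpectedOut = BOp(ExpectedOut, In[I]);
  }
}
```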
Conversations

I'd like to see a bit more clarification here. Are block reductions missing from CUDA itself, or does the CUDA plugin just not expose them?

Hi Alex, I updated the comment by copying it from https://github.com/intel/llvm/blame/sycl/sycl/test/reduction/reduction_nd_s0_dw.cpp#L2 and adding a note that intel::reduce() is not yet supported by CUDA. I don't have any other details regarding CUDA support for that feature. Thank you!

It's implementable, but not in the plugin yet. CUDA doesn't have native equivalents for everything in the GroupAlgorithms extension, but it recently added support for reductions as part of its cooperative-groups functionality.

Thank you for the link. The updated comment is explanatory enough, I think:

// Reductions use work-group builtins (e.g. intel::reduce()) not yet supported
// by CUDA.

I completely agree. I was just providing some more background for @alexbatashev.
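For readers unfamiliar with the builtin discussed above: intel::reduce() is the work-group collective from the GroupAlgorithms extension that the reduction implementation relies on, and it is the piece the CUDA plugin had not yet implemented. A hedged sketch of its typical use inside an nd_range kernel, assuming the 2020-era cl::sycl::intel API (the kernel name GroupReduce and buffer names are illustrative):

```cpp
#include <CL/sycl.hpp>
#include <iostream>
using namespace cl::sycl;

int main() {
  queue Q;
  buffer<int, 1> OutBuf(1);
  Q.submit([&](handler &CGH) {
    auto Out = OutBuf.get_access<access::mode::discard_write>(CGH);
    CGH.parallel_for<class GroupReduce>(
        nd_range<1>{range<1>{16}, range<1>{16}}, [=](nd_item<1> It) {
          // Work-group collective: every work-item contributes one value
          // and all of them receive the combined result. This is the kind
          // of builtin the CUDA plugin did not yet provide.
          int Val = static_cast<int>(It.get_local_linear_id());
          int Sum = intel::reduce(It.get_group(), Val, intel::plus<int>());
          if (It.get_local_linear_id() == 0)
            Out[0] = Sum;
        });
  });
  auto Host = OutBuf.get_access<access::mode::read>();
  std::cout << "Work-group sum: " << Host[0] << "\n"; // expect 0+1+...+15 = 120
  return 0;
}
```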