
[SYCL] Add a workaround for a Level Zero batching issue #4268


Merged (2 commits) on Aug 6, 2021
39 changes: 39 additions & 0 deletions sycl/source/detail/scheduler/commands.cpp
@@ -1094,6 +1094,25 @@ void UnMapMemObject::emitInstrumentationData() {
#endif
}

bool UnMapMemObject::producesPiEvent() const {
// TODO remove this workaround once the batching issue is addressed in Level
// Zero plugin.
// Consider the following scenario on Level Zero:
// 1. Kernel A, which uses buffer A, is submitted to queue A.
// 2. Kernel B, which uses buffer B, is submitted to queue B.
// 3. queueA.wait().
// 4. queueB.wait().
// DPCPP runtime used to treat unmap/write commands for buffer A/B as host
// dependencies (i.e. they were waited for prior to enqueueing any command
// that's dependent on them). This allowed Level Zero plugin to detect that
// each queue is idle on steps 1/2 and submit the command list right away.
// This is no longer the case since we started passing these dependencies in
// an event waitlist and Level Zero plugin attempts to batch these commands,
// so the execution of kernel B starts only on step 4. This workaround
// restores the old behavior in this case until this is resolved.
return MQueue->getPlugin().getBackend() != backend::level_zero;
}

cl_int UnMapMemObject::enqueueImp() {
waitForPreparedHostEvents();
std::vector<EventImplPtr> EventImpls = MPreparedDepsEvents;
@@ -1170,6 +1189,26 @@ const QueueImplPtr &MemCpyCommand::getWorkerQueue() const {
return MQueue->is_host() ? MSrcQueue : MQueue;
}

bool MemCpyCommand::producesPiEvent() const {
// TODO remove this workaround once the batching issue is addressed in Level
// Zero plugin.
// Consider the following scenario on Level Zero:
// 1. Kernel A, which uses buffer A, is submitted to queue A.
// 2. Kernel B, which uses buffer B, is submitted to queue B.
// 3. queueA.wait().
// 4. queueB.wait().
// DPCPP runtime used to treat unmap/write commands for buffer A/B as host
// dependencies (i.e. they were waited for prior to enqueueing any command
// that's dependent on them). This allowed Level Zero plugin to detect that
// each queue is idle on steps 1/2 and submit the command list right away.
// This is no longer the case since we started passing these dependencies in
// an event waitlist and Level Zero plugin attempts to batch these commands,
// so the execution of kernel B starts only on step 4. This workaround
// restores the old behavior in this case until this is resolved.
return MQueue->is_host() ||
MQueue->getPlugin().getBackend() != backend::level_zero;
}

cl_int MemCpyCommand::enqueueImp() {
waitForPreparedHostEvents();
std::vector<EventImplPtr> EventImpls = MPreparedDepsEvents;
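The scenario spelled out in the new comments can be reproduced with two independent queues. The sketch below is illustrative only and not part of this PR: the kernel names, buffer sizes, and accessor style are assumptions. The relevant pattern is that each kernel is submitted to its own queue and the queues are waited on one after the other; without the workaround, the Level Zero plugin may keep kernel B batched in an open command list until step 4.

#include <CL/sycl.hpp>

#include <vector>

class KernelA;
class KernelB;

int main() {
  constexpr size_t N = 1024;
  std::vector<int> DataA(N, 1), DataB(N, 2);

  sycl::queue QueueA;
  sycl::queue QueueB;

  {
    sycl::buffer<int, 1> BufA{DataA.data(), sycl::range<1>{N}};
    sycl::buffer<int, 1> BufB{DataB.data(), sycl::range<1>{N}};

    // 1. Kernel A, which uses buffer A, is submitted to queue A.
    QueueA.submit([&](sycl::handler &CGH) {
      auto AccA = BufA.get_access<sycl::access::mode::read_write>(CGH);
      CGH.parallel_for<KernelA>(sycl::range<1>{N},
                                [=](sycl::id<1> I) { AccA[I] += 1; });
    });

    // 2. Kernel B, which uses buffer B, is submitted to queue B.
    QueueB.submit([&](sycl::handler &CGH) {
      auto AccB = BufB.get_access<sycl::access::mode::read_write>(CGH);
      CGH.parallel_for<KernelB>(sycl::range<1>{N},
                                [=](sycl::id<1> I) { AccB[I] += 1; });
    });

    // 3. queueA.wait() - kernel B may still be sitting in a batched,
    //    not-yet-submitted command list at this point.
    QueueA.wait();

    // 4. queueB.wait() - only here would kernel B's command list be flushed.
    QueueB.wait();
  }
  return 0;
}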
2 changes: 2 additions & 0 deletions sycl/source/detail/scheduler/commands.hpp
@@ -441,6 +441,7 @@ class UnMapMemObject : public Command {
void printDot(std::ostream &Stream) const final;
const Requirement *getRequirement() const final { return &MDstReq; }
void emitInstrumentationData() override;
bool producesPiEvent() const final;

private:
cl_int enqueueImp() final;
@@ -463,6 +464,7 @@ class MemCpyCommand : public Command {
void emitInstrumentationData() final;
const ContextImplPtr &getWorkerContext() const final;
const QueueImplPtr &getWorkerQueue() const final;
bool producesPiEvent() const final;

private:
cl_int enqueueImp() final;
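The header changes expose producesPiEvent() overrides that the scheduler can query to learn whether a command will come with a backend (PI) event. The stand-alone sketch below is a hypothetical illustration of how such a query might be consumed, not the actual DPC++ scheduler code: the Command and Event types and the helper names are invented for this example. Dependencies that report no PI event are waited on from the host before the dependent command is enqueued, which is the old behaviour this PR restores for Level Zero.

#include <iostream>
#include <memory>
#include <vector>

struct Event {
  void wait() const { std::cout << "host wait on dependency\n"; }
};

struct Command {
  virtual ~Command() = default;
  virtual bool producesPiEvent() const { return true; }
  std::shared_ptr<Event> getEvent() const { return MEvent; }
  std::shared_ptr<Event> MEvent = std::make_shared<Event>();
};

// A command that opts out of producing a backend event, mirroring the
// Level Zero workaround in the diff above.
struct UnmapLikeCommand : Command {
  bool producesPiEvent() const override { return false; }
};

// Collect the events to pass to the backend in a waitlist; block on the host
// for any dependency that does not produce a backend event.
std::vector<std::shared_ptr<Event>>
prepareWaitList(const std::vector<std::shared_ptr<Command>> &Deps) {
  std::vector<std::shared_ptr<Event>> WaitList;
  for (const auto &Dep : Deps) {
    if (Dep->producesPiEvent())
      WaitList.push_back(Dep->getEvent()); // let the backend order them
    else
      Dep->getEvent()->wait(); // host dependency: wait before enqueueing
  }
  return WaitList;
}

int main() {
  std::vector<std::shared_ptr<Command>> Deps{
      std::make_shared<Command>(), std::make_shared<UnmapLikeCommand>()};
  auto WaitList = prepareWaitList(Deps);
  std::cout << "events passed to the backend: " << WaitList.size() << "\n";
}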