
Update a workaround to gemm issue in OneMKL #2096


Closed
antonwolfy wants to merge 2 commits

Conversation

antonwolfy (Contributor) commented Oct 8, 2024

The PR proposes to update the workaround implemented in #2082, based on new input from the OneMKL team.
The workaround is now enabled only when an input array is not 16-byte aligned in memory, and the check on the input array size is removed.
The workaround is also extended to the gemm_batch function, since it might be impacted as well.

Additionally, an explicit test is added to verify the workaround scenario.
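
For illustration only, a minimal sketch of the alignment condition described above. The helper name and exact gating logic are hypothetical and not taken from the dpnp backend sources; the point is simply that the workaround path would be taken only when the input buffer does not start on a 16-byte boundary, with no check on the array size.

```cpp
#include <cstdint>

// Hypothetical helper (not the actual dpnp code): decides whether the
// gemm/gemm_batch workaround should be enabled for a given input buffer.
// The workaround is enabled only when the buffer is not 16-byte aligned;
// the previous size-based check is intentionally absent.
inline bool use_gemm_workaround(const void *data_ptr)
{
    return (reinterpret_cast<std::uintptr_t>(data_ptr) % 16u) != 0u;
}
```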

  • Have you provided a meaningful PR description?
  • Have you added a test, reproducer or referred to issue with a reproducer?
  • Have you tested your changes locally for CPU and GPU devices?
  • Have you made sure that new changes do not introduce compiler warnings?
  • Have you checked performance impact of proposed changes?
  • If this PR is a work in progress, are you filing the PR as a draft?

antonwolfy self-assigned this Oct 8, 2024
github-actions bot commented Oct 8, 2024

View rendered docs @ https://intelpython.github.io/dpnp/pull/2096/index.html

antonwolfy (Contributor, Author) commented

The issue has been resolved by the OneMKL team; the workaround is no longer needed.

antonwolfy closed this Oct 15, 2024
antonwolfy deleted the impl-w/a-to-gemm-on-lnl-arl branch May 14, 2025