
Implement a workaround to gemm issue in OneMKL #2082


Merged: 15 commits merged into master from impl-w/a-to-gemm-on-lnl-arl on Oct 2, 2024

Conversation

@antonwolfy (Contributor) commented Oct 2, 2024

This PR implements a workaround for an issue where the gemm function of OneMKL produces incorrect results when running on Lunar Lake, Arrow Lake, or Battlemage graphics architectures.
The PR proposes to enable the w/a only for small input arrays with a non-zero offset, where the issue was initially identified by the tests for the eig/eigh functions.

It is not entirely clear for which input array sizes the w/a needs to be applied, so the thresholds are set close to the values used in the failing tests.
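
As a rough illustration only (not the exact code of this PR), the gating could look like the sketch below. The helper name and the size threshold are made up for the example; it assumes a DPC++ compiler that provides the sycl_ext_oneapi_device_architecture extension with the intel_gpu_bmg_g21 enumerator, and the device is taken by value because ext_oneapi_architecture_is() isn't marked as const.

```cpp
// Minimal sketch of an architecture-gated dispatch for the gemm workaround.
// Not the actual dpnp implementation: helper name and threshold are assumptions.
#include <cstddef>
#include <sycl/sycl.hpp>

namespace syclex = sycl::ext::oneapi::experimental;

// Hypothetical helper: decide whether the gemm workaround should be taken
// for the given device and problem size. The device is passed by value
// because ext_oneapi_architecture_is() is not const-qualified in the
// DPC++ version discussed in the review.
bool use_gemm_workaround(sycl::device dev, std::size_t m, std::size_t n,
                         std::size_t k)
{
    // Only the affected graphics architecture gets the workaround.
    const bool affected_arch =
        dev.ext_oneapi_architecture_is(syclex::architecture::intel_gpu_bmg_g21);

    // Only small inputs, close to the sizes used in the failing eig/eigh
    // tests; the exact limit below is an illustrative assumption.
    constexpr std::size_t size_limit = 16;
    const bool small_input =
        (m <= size_limit) && (n <= size_limit) && (k <= size_limit);

    return affected_arch && small_input;
}
```

The sketch only covers the gating condition; the alternative computation taken on the affected path is not shown here.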

  • Have you provided a meaningful PR description?
  • Have you added a test, reproducer or referred to issue with a reproducer?
  • Have you tested your changes locally for CPU and GPU devices?
  • Have you made sure that new changes do not introduce compiler warnings?
  • Have you checked performance impact of proposed changes?
  • If this PR is a work in progress, are you filing the PR as a draft?

@antonwolfy self-assigned this Oct 2, 2024
github-actions bot (Contributor) commented Oct 2, 2024

View rendered docs @ https://intelpython.github.io/dpnp/index.html

@antonwolfy force-pushed the impl-w/a-to-gemm-on-lnl-arl branch from 7311ae3 to e59f9f7 on October 2, 2024, 13:35
@antonwolfy marked this pull request as ready for review on October 2, 2024, 17:13
@oleksandr-pavlyk (Contributor) left a comment:

Looks good to me!

@vtavana (Collaborator) left a comment:

Thank you, @antonwolfy!

@antonwolfy merged commit 178342c into master on Oct 2, 2024 (45 of 46 checks passed)
@antonwolfy deleted the impl-w/a-to-gemm-on-lnl-arl branch on October 2, 2024, 21:08
antonwolfy added a commit that referenced this pull request Oct 2, 2024
* Implement a workaround to gemm issue in OneMKL

* Fix codespell issue

* Enable w/a also for float dtype

* Add Battlemage G21 architecture to w/a

* Disable w/a for Arrow Lake

* Remove Lunar Lake architecture from the w/a

* Applied the pre-commit hooks

* Update dpnp/backend/extensions/blas/gemm.hpp

Co-authored-by: Oleksandr Pavlyk <[email protected]>

* Applied pre-commit black hook

* Add more clarification to the comment

* Remove excess semicolon

* Removed const keyword suggested in the review comment because ext_oneapi_architecture_is() isn't marked as const

* Applied review comment

* Updated the changelog

---------

Co-authored-by: Oleksandr Pavlyk <[email protected]>
@antonwolfy mentioned this pull request Oct 2, 2024
github-actions bot added a commit that referenced this pull request Oct 2, 2024
antonwolfy added a commit that referenced this pull request Oct 2, 2024
antonwolfy added a commit that referenced this pull request Oct 11, 2024
antonwolfy added a commit that referenced this pull request Oct 14, 2024
antonwolfy added a commit that referenced this pull request Oct 14, 2024
* Revert "Implement a workaround to gemm issue in OneMKL (#2082)"

This reverts commit 178342c.

* Add test to explicitly cover the w/a for gemm and gemm_batch

* Update test to reproduce the exact issue
antonwolfy added a commit that referenced this pull request Oct 14, 2024
* Revert "Implement a workaround to gemm issue in OneMKL (#2082)"

This reverts commit 178342c.

* Add test to explicitly cover the w/a for gemm and gemm_batch

* Update test to reproduce the exact issue
github-actions bot added a commit that referenced this pull request Oct 14, 2024
* Revert "Implement a workaround to gemm issue in OneMKL (#2082)"

This reverts commit 178342c.

* Add test to explicitly cover the w/a for gemm and gemm_batch

* Update test to reproduce the exact issue dd908f7
antonwolfy added a commit that referenced this pull request Oct 14, 2024
* Revert gh-2082 with w/a for gemm issue in OneMKL (#2101)

* Revert "Implement a workaround to gemm issue in OneMKL (#2082)"

This reverts commit 178342c.

* Add test to explicitly cover the w/a for gemm and gemm_batch

* Update test to reproduce the exact issue

* Set release date