[mlir][ArmSME] Update docs #74527

Merged: 2 commits from andrzej/update_docs into llvm:main on Dec 6, 2023

Conversation

banach-space (Contributor)

No description provided.

@llvmbot added the mlir label on Dec 5, 2023

@llvmbot (Member) commented on Dec 5, 2023

@llvm/pr-subscribers-mlir

Author: Andrzej Warzyński (banach-space)

Changes

Full diff: https://github.com/llvm/llvm-project/pull/74527.diff

1 file affected:

  • (modified) mlir/docs/Dialects/ArmSME.md (+21-6)
diff --git a/mlir/docs/Dialects/ArmSME.md b/mlir/docs/Dialects/ArmSME.md
index 505b52938eacc..4af1594c3933c 100644
--- a/mlir/docs/Dialects/ArmSME.md
+++ b/mlir/docs/Dialects/ArmSME.md
@@ -1,13 +1,28 @@
 # 'ArmSME' Dialect
 
-[TOC]
+Basic dialect to target Arm SME.
+
+This dialect defines custom and LLVM IR intrinsic operations that are used to
+target Arm Scalable Matrix Extension. Through the available lowerings one can,
+for example, lower a [linalg.matmul](https://mlir.llvm.org/docs/Dialects/Linalg/#linalgmatmul-linalgmatmulop)
+operation to Arm SME
+[FMOPA](https://developer.arm.com/documentation/ddi0602/2023-03/SME-Instructions/FMOPA--widening---Half-precision-floating-point-sum-of-outer-products-and-accumulate-)
+(floating point outer product) operations. See one of the in-tree end-to-end
+integration tests for reference:
+
+* [Linalg/CPU/ArmSME/matmul.mlir](https://github.com/llvm/llvm-project/blob/main/mlir/test/Integration/Dialect/Linalg/CPU/ArmSME/matmul.mlir)
+* [Vector/CPU/ArmSME/test-outerproduct-f64.mlir](https://github.com/llvm/llvm-project/blob/main/mlir/test/Integration/Dialect/Vector/CPU/ArmSME/test-outerproduct-f64.mlir)
 
-Basic dialect to target Arm SME architectures This dialect contains the
-definitions necessary to target Arm SME scalable matrix operations.
+These tests are run "post-commit" by the
+[clang-aarch64-sve-vla](https://lab.llvm.org/buildbot/#/builders/197) LLVM
+BuildBot worker.
 
-## References
-* https://developer.arm.com/documentation/ddi0616
-* https://developer.arm.com/documentation/ddi0602/2023-03/SME-Instructions
+**References:**
+
+* [The Scalable Matrix Extension (SME), for Armv9-A](https://developer.arm.com/documentation/ddi0616)
+* [A64 -- SME Instructions (alphabetic order)](https://developer.arm.com/documentation/ddi0602/2023-03/SME-Instructions)
+
+[TOC]
 
 ## Operations
 

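For readers unfamiliar with the lowering path the new docs describe, here is a minimal illustrative sketch of the kind of `linalg.matmul` input the ArmSME lowerings start from. It is not part of this patch; the function name and the static 4x4 shapes are chosen purely for illustration.

```mlir
// Illustrative only: a plain linalg.matmul on f32 tensors. The lowering
// pipeline exercised by the integration tests linked above can progressively
// rewrite such an operation into SME outer-product (FMOPA) operations.
func.func @example_matmul(%a: tensor<4x4xf32>, %b: tensor<4x4xf32>,
                          %c: tensor<4x4xf32>) -> tensor<4x4xf32> {
  %0 = linalg.matmul ins(%a, %b : tensor<4x4xf32>, tensor<4x4xf32>)
                     outs(%c : tensor<4x4xf32>) -> tensor<4x4xf32>
  return %0 : tensor<4x4xf32>
}
```

The integration tests listed in the diff run similar IR end to end and check the computed results on the buildbot mentioned above.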
@MacDue (Member) left a comment:


Commit/PR title nit: ArmSME, not just sme ([mlir][ArmSME] Update docs)

LGTM, just a few nits:

for example, lower a [linalg.matmul](https://mlir.llvm.org/docs/Dialects/Linalg/#linalgmatmul-linalgmatmulop)
operation to Arm SME
[FMOPA](https://developer.arm.com/documentation/ddi0602/2023-03/SME-Instructions/FMOPA--widening---Half-precision-floating-point-sum-of-outer-products-and-accumulate-)
(floating point outer product) operations. See one of the in-tree end-to-end

nit:

Suggested change:
-(floating point outer product) operations. See one of the in-tree end-to-end
+(floating-point outer product) operations. See one of the in-tree end-to-end

@banach-space changed the title from [mlir][sme] Update docs to [mlir][ArmSME] Update docs on Dec 6, 2023
Commit: Address comments from Ben
@banach-space merged commit fb62a18 into llvm:main on Dec 6, 2023
@banach-space deleted the andrzej/update_docs branch on March 16, 2024