[Dialect] Add basic oneDNN Graph dialect #43
Conversation
let results = (outs OneDNNGraph_LogicalTensor:$result);

let assemblyFormat =
    "operands attr-dict `:` functional-type(operands, results)";
Why do we need a custom assembly format?
For consistency and visibility, we want to make the assembly format explicit.
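For illustration, with this declared format an op prints its operands followed by the full functional type, exactly as in the test included in this PR:

```mlir
%0 = onednn_graph.add %arg0, %arg1 : (tensor<128x256xf32>, tensor<128x256xf32>) -> tensor<128x256xf32>
```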
This adds lowering and some other stuff like shape propagation. Could you please split it into separate PRs? Simple lowering can go together with the dialect definition.
Let's also keep it clean. Could you please remove all the commented-out code?
def OneDNNGraph_LogicalTensor : TensorOf<[OneDNNGraph_DataType]>;
def OneDNNGraph_FloatLogicalTensor : TensorOf<[OneDNNGraph_FloatDataType]>;
Why do we have two different types? We are not even using the second one.
The second one was used to test quantization ops; I'll remove it for now. Later @BRUCE11111 will open a PR that adds a better way to organize types.
namespace mlir {
namespace onednn_graph {

// https://github.com/onnx/onnx/blob/main/docs/Broadcasting.md
What is this reference for, and how is it related?
This is from before we switched to ResultsBroadcastableShape; I kept it so @BRUCE11111 can migrate his PR here. inferBroadcastShape will be deleted.
// https://github.com/onnx/onnx/blob/main/docs/Broadcasting.md
template <typename ShapeRange>
static LogicalResult inferBroadcastShape(
Do we need it for the dialect definition, or can it be added later?
inferBroadcastShape is necessary for oneDNN Graph op verification, since the InferTensorType trait is used. It will be replaced by the ResultsBroadcastableShape trait next, while properly crediting the author.
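For context, the ONNX document linked above specifies numpy-style multidirectional broadcasting: shapes align from the trailing dimension, and each dimension pair must either match or contain a 1 (missing leading dimensions count as 1). A hedged sketch of what that lets the verifier accept; the broadcast shapes here are illustrative, not from this PR's tests:

```mlir
// Trailing dims align: 256 matches 256; the missing leading dim of %arg1
// counts as 1 and stretches to 128, so the inferred result is 128x256.
%0 = onednn_graph.add %arg0, %arg1 : (tensor<128x256xf32>, tensor<256xf32>) -> tensor<128x256xf32>
```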
PatternRewriter &rewriter) const final {
  auto loc = op->getLoc();
  auto resultTy = dyn_cast<TensorType>(op->getResultTypes().front());
// using ExpOpLowering = UnaryElemwiseLowering< //
//     onednn_graph::ExpOp, CreateLoweredUnaryOp<math::ExpOp>>;
Suggested change: delete these commented-out lines.
// CHECK: tensor.empty()
// CHECK: linalg.generic
// CHECK: arith.addf
%0 = onednn_graph.add %arg0, %arg1 : (tensor<128x256xf32>, tensor<128x256xf32>) -> tensor<128x256xf32>
These can lower to named ops.
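For reference, a minimal sketch of the named-op form, assuming the upstream linalg.add named op (not code from this PR):

```mlir
%empty = tensor.empty() : tensor<128x256xf32>
%0 = linalg.add ins(%arg0, %arg1 : tensor<128x256xf32>, tensor<128x256xf32>)
                outs(%empty : tensor<128x256xf32>) -> tensor<128x256xf32>
```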
I have a separate branch that updates the lowering code to suit our needs after the previous design discussions; this PR only migrates the current code from the old repo. Since we are about to add lowering in a separate PR, I will squash them together there.
4e629e5 to cd43162
@kurapov-peter @Devjiu Please review again.
This looks almost ready. I didn't look at the shape inference too closely as it's subject to change.
let assemblyFormat =
    "operands attr-dict `:` functional-type(operands, results)";
Do we really need it?
Yeah, I think making the assembly format explicit would be better.
def OneDNNGraph_DataType : AnyTypeOf<[
    F16,
    BF16,
Sorry, my eye is twitching otherwise :)
Suggested change (formatting of this list):
def OneDNNGraph_DataType : AnyTypeOf<[
    F16,
    BF16,
Fixed
#include "mlir/Dialect/Tosa/IR/TosaOps.h"
#include "mlir/Dialect/Tosa/Utils/QuantUtils.h"
#include "mlir/Dialect/Tosa/Utils/ShapeUtils.h"
What are these for?
Removed; they were used to test some functionality.
Seems to be missing checks?
Added
Add basic oneDNN Graph dialect, ops, and types. More oneDNN Graph ops and lowering will be added in following PRs.
Also renamed OnednnGraphDialect to OneDNNGraphDialect to match the original definition.
Tracking: #13