Extended graph_op IRGen to support calling C APIs directly. #19524
@@ -427,8 +427,9 @@ SILDeserializer::readSILFunctionChecked(DeclID FID, SILFunction *existingFn,
     break;

   case SILStage::Lowered:
-    llvm_unreachable("cannot deserialize into a module that has entered "
-                     "Lowered stage");
+    if (!declarationOnly) // SWIFT_ENABLE_TENSORFLOW
+      llvm_unreachable("cannot deserialize into a module that has entered "
+                       "Lowered stage");
   }

   if (FID == 0)
Review comment: It's not super obvious to me this is a bug. Which line crashed when you didn't change this? Is it …?

Author reply: Yup. The crash stack trace is: … I think it's a bug because the comment above says: …

Review comment: I think this is expected behavior since it's in IRGen, which turns the SIL stage to Lowered. Have you tried creating the declaration of the function explicitly? You could define a helper along the lines of getOrCreateRetainFunction().

Review comment: As Richard suggests, I think this is expected behavior in IRGen. As an alternative to creating declarations explicitly, can we remember the relevant SILFunctions somewhere in the context of GraphOp and use them in getAddrOfSILFunction?

Author reply: Thanks for the pointers. I'd like to avoid explicit declarations if at all possible -- that would increase code complexity and also maintenance overhead (when the Swift function signature of _swift_tfc_GetGlobalEagerContext is changed, we would need to change the IRGen compiler code accordingly as well). Also, while we only have ~5 compiler runtime entry points for now, we'll be calling ~20 TF eager C APIs (see the TF repo, tensorflow/tensorflow/c/eager/c_api.h), and creating explicit decls for them could involve a lot more code complexity and maintenance overhead.

The referenced code for getOrCreateRetainFunction() (I also studied swift/include/swift/Runtime/RuntimeFunctions.def) covers a different scenario -- those runtime functions do not have their implementations already defined as external C APIs or Swift functions (via silgen names).

I thought deserializing the SIL function decl (not the body) in the IRGen phase would still be safe. Do you have a specific concern?

@bgogul, it'd be possible to cache the SILFunction* at an earlier compiler phase (before we get into IRGen), but if the cached version is still valid to use, I believe the deserialized version (just the decl) would be safe to use as well, and the latter would be somewhat simpler.

Review comment: I think one concern with deserializing during the IRGen phase is that we don't have a chance to run some of the passes that would have run if we had deserialized that function at an earlier stage. Even though you are only using it here for getting the declaration, it goes against separation of concerns. Perhaps others can jump in if this is not a major concern.

Isn't it the case that we check that the TensorFlow module is imported at some point? Given that these functions are part of TensorFlow, why can't we cache them then and make sure they are valid?

Review comment: Is there a reason we need to codegen any TensorFlow C API calls from IRGen directly? Can they be moved and concentrated into one or two entry points in the runtime?

Author reply: Discussed offline. I'll submit this patch and start a discussion with the core team on this "SILFunction lookup"-based approach.
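For readers following the thread, here is a minimal, self-contained sketch of the "silgen name" entry-point pattern being debated. The typealias and the stub body are stand-ins so the example compiles on its own (the real definitions live in the TensorFlow library); only the @_silgen_name mechanism is the point: it pins the function's SIL-level name, so graph_op IRGen can locate the function by that name instead of duplicating its Swift signature as an explicit declaration in C++.

// Sketch only: in the real library, CTFEContext is the opaque TFE context
// pointer; an optional OpaquePointer stands in here so this compiles alone.
public typealias CTFEContext = OpaquePointer?

// @_silgen_name fixes the function's low-level symbol/SIL name. Compiler-
// generated code can then call "_swift_tfc_GetGlobalEagerContext" by name,
// and the Swift signature stays declared in exactly one place.
@_silgen_name("_swift_tfc_GetGlobalEagerContext")
public func _GetGlobalEagerContext() -> CTFEContext {
  // The real implementation returns _ExecutionContext.global.eagerContext.
  return nil
}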
@@ -1166,3 +1166,20 @@ public func _GetGlobalEagerContext() -> CTFEContext {
   return _ExecutionContext.global.eagerContext
 }

+// TODO: replace these functions with generic ones that do not hard-code Float.
+@inlinable
+@_silgen_name("_swift_tfc_ExtractFloatCTensorHandle")
+public func _ExtractCTensorHandle(
+  _ handle: TensorHandle<Float>
+) -> CTensorHandle {
+  return handle.cTensorHandle
+}
+
+@inlinable
+@_silgen_name("_swift_tfc_CreateFloatTensorHandleFromCTensorHandle")
+public func _CreateTensorHandleFromCTensorHandle(
+  _ ownedCHandle: CTensorHandle
+) -> TensorHandle<Float> {
+  return TensorHandle<Float>(owning: ownedCHandle)
+}
Review comment: I think this will be more straightforward to do when there is a non-generic superclass of TensorHandle.

Author reply: Ack. Will refresh this once I use Richard's patch.
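To make the TODO concrete, here is a hypothetical sketch of what the generic (non-Float) entry points could look like once a non-generic superclass of TensorHandle exists, as the comment above suggests. Everything here is illustrative: the base class name _AnyTensorHandle, the simplified TensorHandle stand-in, the CTensorHandle typealias, and the silgen names without "Float" in them are assumptions, not the actual patch. The sketch also shows why the superclass helps: a generic Swift function lowers to SIL with extra type-metadata parameters, which makes it awkward for IRGen-generated code to call directly, whereas a non-generic signature avoids the hard-coded Float without needing generics at the entry point.

// Hypothetical stand-ins so the sketch compiles on its own.
public typealias CTensorHandle = OpaquePointer

// Assumed non-generic base class exposing the underlying C handle.
public class _AnyTensorHandle {
  public let cTensorHandle: CTensorHandle
  public init(owning handle: CTensorHandle) { self.cTensorHandle = handle }
}

// Simplified stand-in for the real generic TensorHandle<Scalar>.
public final class TensorHandle<Scalar>: _AnyTensorHandle {}

// Non-generic entry points: no Float hard-coding, still directly callable by name.
@_silgen_name("_swift_tfc_ExtractCTensorHandle")
public func _ExtractCTensorHandle(
  _ handle: _AnyTensorHandle
) -> CTensorHandle {
  return handle.cTensorHandle
}

@_silgen_name("_swift_tfc_CreateTensorHandleFromCTensorHandle")
public func _CreateTensorHandleFromCTensorHandle(
  _ ownedCHandle: CTensorHandle
) -> _AnyTensorHandle {
  return _AnyTensorHandle(owning: ownedCHandle)
}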
Review comment: This all LGTM.