[AutoDiff] Fix differentiable_function-related specialization crashes. #29800

Merged
dan-zheng merged 1 commit into tensorflow from autodiff-generic-specialization-crash-fix on Feb 13, 2020

Conversation

dan-zheng
Contributor

Fix crashes related to generic specialization of `partial_apply` operands to
`differentiable_function` instructions.

`differentiable_function` requires derivative function operand types to match the
expected derivative function types computed from the original function operand's
type, so one operand cannot be specialized without also specializing the others.

Resolves TF-891 and TF-1126.


Fixed verification error from TF-1126:

```
Looking for a function: $ss4SIMDPss14DifferentiableRzSB6Scalars11SIMDStoragePRpzsAA13TangentVectorsACPRpzSBAhI_AdFRPzrlE12_vjpSubtract3lhs3rhsx5value_AJ_AJtAJc8pullbacktx_xtFZs5SIMD8VySfG_Tg5
Expected type: @convention(method) (@in_guaranteed SIMD8<Float>, @in_guaranteed SIMD8<Float>, @thick SIMD8<Float>.Type) -> (@out SIMD8<Float>, @owned @callee_guaranteed (@in_guaranteed SIMD8<Float>) -> (@out SIMD8<Float>, @out SIMD8<Float>))
Found    type: @convention(method) (SIMD8<Float>, SIMD8<Float>, @thick SIMD8<Float>.Type) -> (@out SIMD8<Float>, @owned @callee_guaranteed (@in_guaranteed SIMD8<Float>) -> (@out SIMD8<Float>, @out SIMD8<Float>))
Assertion failed: (ReInfo.getSpecializedType() == SpecializedF->getLoweredFunctionType() && "Previously specialized function does not match expected type."), function lookupSpecialization, file /Users/swiftninjas/s4tf/swift/lib/SILOptimizer/Utils/Generics.cpp, line 1827.
```
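
For context, here is a rough Swift-level sketch of the kind of code that exercises this path. It is not the actual TF-1126 reproducer; it assumes a toolchain that provides the `_Differentiation` module and the standard library's SIMD derivatives (as on the tensorflow branch this PR targets), and the `gradient(at:_:of:)` call is only illustrative.

```swift
import _Differentiation

// Rough sketch, not the actual TF-1126 test case. Differentiating SIMD arithmetic at a
// concrete type leads the generic specializer to specialize the generic standard-library
// derivative (the `_vjpSubtract` symbol in the log above) for SIMD8<Float>. Before this
// fix, specializing a `partial_apply` operand of a `differentiable_function` instruction
// on its own produced the type mismatch reported by the verifier above.
let (da, db) = gradient(at: SIMD8<Float>(repeating: 3),
                        SIMD8<Float>(repeating: 1)) { a, b in (a - b).sum() }
print(da, db)
```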

dan-zheng added the tensorflow (This is for "tensorflow" branch PRs.) label on Feb 12, 2020
dan-zheng requested review from rxwei and marcrasi on February 12, 2020 23:29
@dan-zheng
Contributor Author

@swift-ci Please test tensorflow


marcrasi left a comment

Ooo nice.

Seems like we'll want to allow specialization of these things eventually because specialization can make things much faster?

Contributor

rxwei left a comment

I think eventually we'll want specialization to apply to the `@differentiable` function, not operands to the `differentiable_function` instruction.

@dan-zheng
Contributor Author

> Seems like we'll want to allow specialization of these things eventually because specialization can make things much faster?

Yes, allowing specialization if possible would be ideal!

A nice approach might be to relax `differentiable_function`/`differentiable_function_extract` verification while preserving global derivative function type consistency, à la #27923.

This would avoid requiring that every function-type-rewriting transformation (e.g. generic specialization, loadable-by-address) update original/derivative function types consistently.

dan-zheng merged commit 5f62183 into tensorflow on Feb 13, 2020
dan-zheng deleted the autodiff-generic-specialization-crash-fix branch on February 13, 2020