
Use FloatingPoint constraint instead of BinaryFloatingPoint. #11


Merged (1 commit, Jan 7, 2019)

Conversation

dan-zheng
Member

A follow-up to swiftlang/swift#21673.

Regenerated RawOpsGenerated.swift using TensorFlow version from
swift/utils/update_checkout/update-checkout-config.json:
tensorflow/tensorflow@c7c0a76

Use `FloatingPoint` constraint instead of `BinaryFloatingPoint`.

A follow-up to swiftlang/swift#21673.

Regenerated `RawOpsGenerated.swift` using TensorFlow version from
`swift/utils/update_checkout/update-checkout-config.json`:
https://github.com/tensorflow/tensorflow/tree/c7c0a76f1d6b8ac2057434fbf638b77993c6b88e
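The motivation for the constraint change can be illustrated in plain Swift (a minimal sketch with a hypothetical `squaredError` helper, not code from this repository): `BinaryFloatingPoint` refines `FloatingPoint` with radix-2 requirements, while everything that generic numeric code here needs (arithmetic, signed zero, infinity, NaN) is already available on `FloatingPoint`, so the looser constraint admits more scalar types.

```swift
// Hypothetical helper, generic over the looser `FloatingPoint` constraint.
// Any conforming type works; `BinaryFloatingPoint` would only narrow this.
func squaredError<T: FloatingPoint>(_ prediction: T, _ target: T) -> T {
    let diff = prediction - target
    return diff * diff
}

print(squaredError(3.0, 1.0))            // works for Double
print(squaredError(Float(3), Float(1)))  // and for Float
```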
rxwei merged commit 6389cbc into tensorflow:master on Jan 7, 2019
dan-zheng added a commit to dan-zheng/swift that referenced this pull request Jan 9, 2019
- Use `FloatingPoint` rather than `BinaryFloatingPoint` to constrain
  differentiability.
  - Follows from:
    - swiftlang#21673
    - tensorflow/swift-bindings#11
- Use `@differentiable` where clauses to constrain differentiability
  of numeric operations.
  - The most common constraint is `where Scalar : FloatingPoint` because
    `Tensor` conditionally conforms to `Differentiable where Scalar : FloatingPoint`.

Todos:
- Make more `Tensor` operations differentiable.
  - This includes reduction and broadcasting ops.
  - This is enabled by `@differentiable` where clause type-checking.
- Use VJP functions instead of adjoint functions.
  - I would prefer that this be done in a separate patch, after this patch
    adds the correct `@differentiable` where clauses.
- Add tests for newly `@differentiable` `Tensor` operations.
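The conditional-conformance pattern described above (`Tensor` conforming to `Differentiable` only `where Scalar : FloatingPoint`) can be sketched in stock Swift with stand-in types. `SimpleTensor` and `MockDifferentiable` below are illustrative names, not the real TensorFlow library types, and the real `Differentiable` protocol has requirements omitted here:

```swift
// Stand-in for the library's `Differentiable` protocol (requirements omitted).
protocol MockDifferentiable {}

// Stand-in for `Tensor`, generic over any scalar type.
struct SimpleTensor<Scalar> {
    var scalars: [Scalar]
}

// Conformance is available only when Scalar is FloatingPoint, mirroring
// `Tensor : Differentiable where Scalar : FloatingPoint`.
extension SimpleTensor: MockDifferentiable where Scalar: FloatingPoint {}

func requiresDifferentiable<T: MockDifferentiable>(_ value: T) -> Bool { true }

print(requiresDifferentiable(SimpleTensor<Double>(scalars: [1, 2])))
// `SimpleTensor<Int>` would be rejected at compile time, which is the
// behavior the `where Scalar : FloatingPoint` clauses encode.
```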
dan-zheng added a commit to swiftlang/swift that referenced this pull request Jan 9, 2019
* [AutoDiff] [API] Revamp `@differentiable` usages in stdlib.

- Use `FloatingPoint` rather than `BinaryFloatingPoint` to constrain
  differentiability.
  - Follows from:
    - #21673
    - tensorflow/swift-bindings#11
- Use `@differentiable` where clauses to constrain differentiability
  of numeric operations.
  - The most common constraint is `where Scalar : FloatingPoint` because
    `Tensor` conditionally conforms to `Differentiable where Scalar : FloatingPoint`.
- `Tensor` now conditionally conforms to `Differentiable` where
  `Scalar : Differentiable & FloatingPoint`.
- Allow `@differentiable` where clause conformance requirements to reference
  protocol composition types.

Todos:
- Make more `Tensor` operations differentiable.
  - This includes reduction and broadcasting ops.
  - This is enabled by `@differentiable` where clause type-checking.
- Use VJP functions instead of adjoint functions.
  - I would prefer that this be done in a separate patch, after this patch
    adds the correct `@differentiable` where clauses.
- Add tests for newly `@differentiable` `Tensor` operations.

* [AutoDiff] Make VJP applications use the correct substitution map.

If a custom `@differentiable` attribute defines a VJP and where clause
requirements, VJP applications should use a substitution map involving
those requirements.

Note: more related cases need to be handled, such as `@differentiable`
attributes with where clause requirements but no VJP. These cases will
be handled later.
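The VJP shape the commits refer to can be sketched by hand in plain Swift (an illustrative `squareWithPullback`, not the compiler-synthesized code): a VJP returns the function's value together with a pullback closure that maps an output cotangent back to an input cotangent.

```swift
// Hand-written VJP for f(x) = x². The pullback multiplies the incoming
// cotangent `v` by the derivative 2x, as reverse-mode AD requires.
func squareWithPullback(_ x: Double)
    -> (value: Double, pullback: (Double) -> Double)
{
    (value: x * x, pullback: { v in 2 * x * v })
}

let (y, pullback) = squareWithPullback(3.0)
print(y)             // 9.0
print(pullback(1.0)) // 6.0: the derivative of x² at x = 3
```

Adjoint functions computed only the gradient; packaging the value and the pullback together is what lets where-clause requirements flow into the substitution map at the application site.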