Reapply: "relax tolerances for all unary float ops (#9585)", "Add SupportedTensorDtypes::BOOL (#9584)", new op_mul test (#11206)
These were reverted because they were part of a stack that had internal test failures.
Original #9585 summary:
We were requiring ourselves to compute at double-precision, but ATen
actually converts non-floating-point types to `float` by default, not
`double`. Use the ATen tolerances everywhere.
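A quick sketch of why the tolerance choice matters (not from the original diff; `round_to_float32` is a hypothetical helper here): a reference value computed at double precision differs from its nearest float32 value by roughly float32 epsilon, which dwarfs double-precision tolerances.

```python
import math
import struct

def round_to_float32(x: float) -> float:
    """Round a Python double (float64) to the nearest float32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

# If the kernel promotes integral inputs to float32 (as ATen does by
# default) but the test reference is computed in float64, the two can
# legitimately disagree by up to ~float32 epsilon (~1.2e-7 relative).
# Requiring double-level tolerances (~2.2e-16) would flag correct
# results as failures.
ref64 = math.exp(2.0)
ref32 = round_to_float32(ref64)
rel_err = abs(ref64 - ref32) / ref64
```

Here `rel_err` lands on the order of 1e-8, far above float64 epsilon, which is why the tests should use ATen's float-level tolerances.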
Original #9584 summary: none
Original #11206 summary:
This tests a possibly-surprising result: int8(100) * int8(100) with an output type of long is 16 in ATen, even though the output type could hold 10000.
Differential Revision: [D76754823](https://our.internmc.facebook.com/intern/diff/D76754823/)
[ghstack-poisoned]