fix Piper test failures #17135
Conversation
@swift-ci please test tensorflow
Thanks, Marc, for looking into the test fixes.
I agree it should be a high priority to get those tests running under OSS, so that we can minimize the effort involved in merging code from GitHub. I can help with the work if needed.
@@ -1,8 +1,5 @@
// RUN: %target-swift-frontend -Xllvm -tf-dump-intermediates -O -emit-sil %s -verify

// FIXME(b/78371828): Should this test work with optimized_stdlib?
// UNSUPPORTED: optimized_stdlib
Will removing this fail the test on the OSS side?
Same for retain_release.swift below (I'm trying to re-enable that test in Marc's other googleprivate branch).
The test used to fail on OSS because TensorFlow was optimized. Now that it's not optimized, this test passes on OSS. (Same for the other tests).
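For context, a hedged sketch of how the lit directives in the hunk above behave: `UNSUPPORTED: optimized_stdlib` makes lit skip the file whenever the build declares the `optimized_stdlib` feature, so deleting that line re-enables the test in optimized-stdlib configurations. The RUN line below mirrors the hunk; the import and function body are placeholders, not the real test contents.

```swift
// Placeholder lit test file; directives mirror the hunk above.
// RUN: %target-swift-frontend -Xllvm -tf-dump-intermediates -O -emit-sil %s -verify
// lit skips this whole file when the build declares the optimized_stdlib feature:
// UNSUPPORTED: optimized_stdlib
import TensorFlow

public func placeholderTest(_ x: Tensor<Float>) -> Tensor<Float> {
  return x + x  // placeholder body; the real tests FileCheck the emitted SIL
}
```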
@@ -65,8 +65,8 @@ public func testScalar(f: Float) { // expected-warning {{'f' implicitly copied t
// CHECK-NEXT: %3 = integer_literal $Builtin.Int32, 1
// CHECK: [[CONST:%.*]] = builtin "__tfop_Const,dtype$dtype,value$tensor,device"(%3 : $Builtin.Int32, %2 : $Builtin.FPIEEE32, {{.*}}) : $TensorHandle<Builtin.FPIEEE32>
// CHECK-NEXT: [[CAST:%.*]] = unchecked_ref_cast [[CONST]] : $TensorHandle<Builtin.FPIEEE32> to $TensorHandle<Float>
// CHECK: [[ADD1:%.*]] = builtin "__tfop_Add,$in,$in,device"(%1 : $TensorHandle<Float>, [[CAST]] : $TensorHandle<Float>, {{.*}}) : $TensorHandle<Float>
// CHECK: [[ADD2:%.*]] = builtin "__tfop_Add,$in,$in,device"([[ADD1]] : $TensorHandle<Float>, [[ADD1:%.*]] : $TensorHandle<Float>, {{.*}}) : $TensorHandle<Float>
// CHECK: [[ADD1:%.*]] = builtin "__tfop_Add,$in,$in,T,device"(%1 : $TensorHandle<Float>, [[CAST]] : $TensorHandle<Float>, {{.*}}) : $TensorHandle<Float>
Is the addition of the "T" attr from the raw-ops work by @LaurentMazare? What is the value of having such an explicit attr (if there is none, it'd be nice to keep the SIL as simple as possible)? From https://github.com/tensorflow/tensorflow/blob/5fa7b03a255d3c0d05aa48e7604a94185ef6b9e2/tensorflow/core/ops/math_ops.cc#L302, it seems T can be inferred from the operands x and y.
Since this test change is not introduced by this PR, please feel free not to get blocked on it, but we'll want to follow up with Laurent about this.
@mhong, that's a good point. In the generated wrappers we always specify this "T" attr explicitly, as I don't think there is an easy way to tell from the op descriptions whether it could be inferred or not.
We could add a heuristic that removes all the "T" attributes, but I'm not sure how robust that would be.
Thanks, Laurent, for the context. It seems the info in TF's REGISTER_OP (in the .cc files) and in ops.pbtxt is not exactly the same, and the latter is a superset of the former? E.g., the attr "T" for op "Add" is only present in ops.pbtxt.
Given that your work is based on ops.pbtxt, I agree that respecting the info there (and not adding extra info/heuristics to override or remove things) is probably the simplest and most robust approach for now.
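To make the inference question concrete, here is a minimal sketch (plain Swift, not the actual wrapper generator or its data model) of the rule being discussed: a type attr such as "T" is inferable exactly when some input's declared type mentions it, as with Add's inputs x: T and y: T; otherwise a generated wrapper has to pass it explicitly. All names below are illustrative assumptions.

```swift
// Simplified model of an op description; the real generator reads ops.pbtxt.
struct OpDescription {
    let name: String
    let inputTypeAttrs: [String]  // type attrs referenced by inputs, e.g. ["T", "T"] for Add's x and y
    let attrs: [String]           // declared attrs, e.g. ["T"]
}

// Attrs that no input references and that a wrapper would therefore
// have to set explicitly on the generated op.
func explicitAttrs(for op: OpDescription) -> [String] {
    return op.attrs.filter { !op.inputTypeAttrs.contains($0) }
}

let add = OpDescription(name: "Add", inputTypeAttrs: ["T", "T"], attrs: ["T"])
print(explicitAttrs(for: add))  // [] — "T" can be inferred from the operands x and y
```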
@@ -7,6 +7,7 @@
import StdlibUnittest
import CTensorFlow
import TensorFlow
import SwiftOnoneSupport
why is this needed?
Without this, the test fails with a linker error when TensorFlow is not optimized. I'm not sure why the test passes without this in Piper though.
This sounds suspicious. Should we add a TODO/bug to investigate it more?
Yeah. I added a TODO.
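A hedged sketch of what the resulting test header might look like after that change; the TODO wording below is illustrative and not copied from the actual diff.

```swift
import StdlibUnittest
import CTensorFlow
import TensorFlow
// TODO: Investigate why this import is needed to avoid a linker error when
// stdlib/public/TensorFlow is built without optimizations, and why the same
// test passes without it in Piper.
import SwiftOnoneSupport
```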
After this PR lands, I think that Piper and external builds will run all of the same tests, except that Piper also runs tests against GPUs and TPUs, while external builds don't.
Great!
2acdeb5 to 824519a
@swift-ci please test tensorflow
@swift-ci please test tensorflow
@swift-ci please test tensorflow
I encountered some test failures while exporting to Piper.

1. A change to `Ops.swift` accidentally removed the `Tensor<Float>()`. This breaks GPU tests, which we do not run in our external builds yet.
2. The SIL in `test/TensorFlow/integration.swift` changed a bit. We did not notice because `test/TensorFlow/integration.swift` does not run in our external builds (a simplified sketch of the FileCheck style it uses follows below).
3. `test/TensorFlow/deabstraction_finished.swift` fails on Piper, because Piper builds `stdlib/public/TensorFlow` with optimizations turned off, and constexpr folding can't deal with all the non-optimized stuff yet.

I fixed 1 & 2 and ignored 3 (since @lattner's changes will probably repair that soon). I also turned off optimizations in `stdlib/public/TensorFlow` and enabled a few new tests in the external build, so that differences between external and Piper are less likely to bite us in the future.
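As a reference for point 2, here is a hedged, simplified sketch of the FileCheck style these SIL tests use (mirroring the hunk earlier in this conversation): each CHECK line pins part of the emitted SIL, and a capture like [[ADD:%.*]] binds a SIL value name for reuse in later lines, so any change in the compiler's output, such as the added "T" attr, requires updating the expectations. The builtin name is taken from the hunk above; the RUN line is adapted to pipe into FileCheck, and the Swift function and CHECK lines are placeholders, not the real integration.swift contents.

```swift
// RUN: %target-swift-frontend -Xllvm -tf-dump-intermediates -O -emit-sil %s -verify | %FileCheck %s
import TensorFlow

public func doubleIt(_ x: Tensor<Float>) -> Tensor<Float> {
  return x + x
}

// The capture [[ADD:%.*]] binds whatever SIL value the builtin produces;
// the attr list in the builtin name must match the compiler's output exactly.
// CHECK: [[ADD:%.*]] = builtin "__tfop_Add,$in,$in,T,device"
// CHECK: return
```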