Enable strict deabstraction in test/integration.swift #18239

Merged: bgogul merged 4 commits into swiftlang:tensorflow from the enable_strict_integration_8304 branch on Jul 26, 2018

Conversation

bgogul (Contributor) commented on Jul 26, 2018:

Partially resolves SR-8304.

@@ -49,6 +49,34 @@ public func testTensor(a: Tensor<Float>, b: Tensor<Float>) {
// CHECK: [[FINISHFN:%.*]] = function_ref @_swift_tfc_FinishTensorComputation
// CHECK-NEXT: apply [[FINISHFN]]([[PROGRAM]],

// STRICTDA-LABEL: --- TFPartition Accelerator Result: {{.*}}testTensor{{.*}}

mhong commented:
it's pretty hard to review these large test diffs -- would it be possible (and better) to simply cut over to using strict DA for this test case, rather than generating duplicated test outputs via the STRICTDA prefix?

see https://github.com/apple/swift/pull/18038/files#diff-3ed06a3f2310d2bb22ecdbfc56304e35 as an example, where the diffs are much smaller and easier to verify.
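
For context, the two approaches differ mainly in the RUN lines and FileCheck prefixes. A minimal sketch, assuming a hypothetical -enable-strict-deabstraction frontend flag (the actual flag and RUN lines in this test may differ):

    // Duplicated outputs: the test is checked once per pipeline, so every
    // change produces two sets of check-line diffs.
    // RUN: %target-swift-frontend -emit-sil %s | %FileCheck %s
    // RUN: %target-swift-frontend -emit-sil -enable-strict-deabstraction %s | %FileCheck --check-prefix=STRICTDA %s

    // Cut-over: only the strict-DA pipeline is exercised and the existing
    // CHECK prefix is reused, so changes produce a single, smaller diff.
    // RUN: %target-swift-frontend -emit-sil -enable-strict-deabstraction %s | %FileCheck %s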

bgogul (Contributor Author) replied:
I can certainly do that (and it would actually have saved me quite a bit of work)!

Is there still value in testing the old path where strict deabstraction is disabled?

bgogul (Contributor Author) replied:
Given that we are very close to getting strict deabstraction enabled, we can simply check the strict deabstraction output. Done. The PR should now be easier to review.

@@ -239,15 +230,14 @@ public func test_multiple_ifs(status: Bool) {
_hostOp(a)
}

// The results are different here because the compiler optimizes well

bgogul (Contributor Author) commented:
@mhong please make sure this is fine.

mhong replied:
hmm, the two ifs got merged, probably by the performance pipeline -- if so, we'll (temporarily) lose that optimization once we move up the partition pass, causing another test diff.

So it might be better to rewrite this test with 2 different condition values, like public func test_multiple_ifs(status1: Bool, status2: Bool) -- see the sketch below.

(It could be useful to have separate tests on what the optimizer can do, but this "integration" test suite is not the best place for that.)
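
A rough sketch of that rewrite, with two independent conditions so the optimizer cannot fold the branches together (the tensor ops in the body are illustrative placeholders, not the actual test code):

    public func test_multiple_ifs(status1: Bool, status2: Bool) {
      var a = Tensor<Float>(1.0)
      let b = a
      if status1 {
        a += b   // first conditional region
      }
      if status2 {
        a -= b   // second conditional region, guarded by a different value
      }
      _hostOp(a)
    }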

bgogul (Contributor Author) replied:
Done. I just changed the second condition instead of adding a new variable.

@@ -503,17 +480,17 @@ public func testNoInlineUser() {
noInlineUser(x)
}

// NOTE: the output for STRICT_DA does not have strong_retain or release.

bgogul (Contributor Author) commented:
@mhong Please make sure this is also OK.

mhong replied:
Looks good!

mhong left a comment:
Thanks! The patch is now much simpler.

// CHECK-NEXT: %5 = metatype $@thick Float.Type
// CHECK-NEXT: %6 = string_literal utf8 "/device:CPU:0"
// CHECK-NEXT: %7 = builtin "__tfop_Sub,$in,$in,T,__device"(%4 : $TensorHandle<Float>, %4 : $TensorHandle<Float>, %5 : $@thick Float.Type, %6 : $Builtin.RawPointer) : $TensorHandle<Float>
// CHECK-NEXT: [[A:%.*]] = graph_op "Add,i,i"(%0 : $TensorHandle<Float>, %0 : $TensorHandle<Float>)

mhong commented:
can we add back the checks on attrs like T and __device? Doing it on the first graph_op inst (or some of the insts) should be sufficient.

one example is:

  %2 = graph_op "Add,i,i"(%0 : $TensorHandle<Float>, %0 : $TensorHandle<Float>) {T: $Float, __device: "/device:CPU:0"} : $TensorHandle<Float> // users: %3, %3
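
A corresponding check might look roughly like this (a sketch derived from the example above, not necessarily the exact line added in this PR):

    // CHECK: [[A:%.*]] = graph_op "Add,i,i"({{%.*}} : $TensorHandle<Float>, {{%.*}} : $TensorHandle<Float>) {T: $Float, __device: "/device:CPU:0"} : $TensorHandle<Float>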

bgogul (Contributor Author) replied:
Done.

@bgogul bgogul merged commit 3b1a012 into swiftlang:tensorflow Jul 26, 2018
@bgogul bgogul deleted the enable_strict_integration_8304 branch July 27, 2018 06:04