Fix a bug in CopyForwarding. Bailout during destroy hoisting. #19103


Merged: 1 commit merged into swiftlang:master on Sep 4, 2018
Conversation

atrick (Contributor) commented Sep 1, 2018

Once the algorithm has begun hoisting destroys globally, there's no
way to cleanly bail out. The previous bail-out attempt could result
in an assert, or in a lost destroy in release mode.
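
To make the constraint concrete, here is a minimal Swift sketch of the
"analyze everything, then commit" shape this implies. It is not the
actual C++ CopyForwarding code; Destroy, canHoist, and hoist are
illustrative placeholders. The legality of every hoist has to be
established before the first rewrite, because a partially applied
global rewrite cannot be rolled back:

// Minimal sketch, assuming a two-phase "analyze, then commit" structure;
// Destroy, canHoist, and hoist are hypothetical placeholders, not the
// actual CopyForwarding (C++) API.
struct Destroy {}

func hoistAllDestroys(_ destroys: [Destroy],
                      canHoist: (Destroy) -> Bool,
                      hoist: (Destroy) -> Void) -> Bool {
    // Phase 1: pure analysis; bailing out here leaves the IR untouched.
    guard destroys.allSatisfy(canHoist) else { return false }
    // Phase 2: mutate only once every hoist is known to be legal.
    destroys.forEach(hoist)
    return true
}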

This is continued fallout from changes in the previous release to
upstream SILGen and mandatory passes, such as PredictableMemOps, which
no longer preserve natural variable lifetimes.

In this case, we end up with SIL like this before CopyForwarding:

bb(%arg):
  %local_addr = alloc_stack
  store %arg to %local_addr       // %arg is copied into the stack slot
  %payload = switch_enum(%arg)
  retain %arg
  store %arg to %some_addr        // ...but %arg itself is still live here
  destroy_addr %local_addr        // the pass tries to hoist this to
                                  // %local_addr's last use
  release_value %arg

We're attempting to hoist the destroy_addr to %local_addr's last use,
but can't, because the lifetimes of the alloc_stack (%local_addr) and
of the value stored on the stack (%arg) have become mixed up by an
upstream pass. We now detect this situation and bail out of destroy
hoisting. Sadly, the bailout might only partially recover in the
presence of interesting control flow, as happens in the test case's
Graph.init function. This triggers an assert in debug builds, but in
release mode it simply drops the destroy.
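
For context, source of roughly the shape below can lower to the SIL
pattern above. This is a hypothetical Swift reduction, not the actual
SR-8526 test case: Node, Edge, and this Graph.init body are invented
for illustration.

// Hypothetical reduction (illustrative names, not the real test case):
// an enum with a reference-counted payload is switched on inside an
// initializer, producing a stack temporary plus a switch_enum as in
// the SIL above.
final class Node {}

enum Edge {
    case to(Node)
    case empty
}

struct Graph {
    var first: Node?

    init(_ edge: Edge) {
        switch edge {
        case .to(let node):
            first = node
        case .empty:
            first = nil
        }
    }
}

Dropping the stack temporary's destroy is what surfaced as the SR-8526
leak in release builds.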

Fixed rdar://problem/43888666 [SR-8526]: Memory leak after switch in
release configuration.

atrick (Contributor, Author) commented Sep 1, 2018

@swift-ci test.

swift-ci (Contributor) commented Sep 1, 2018

Build failed: Swift Test OS X Platform (Git Sha: f764b8b)

aschwaighofer (Contributor) left a comment

LGTM.

atrick (Contributor, Author) commented Sep 4, 2018

@swift-ci test.

swift-ci (Contributor) commented Sep 4, 2018

Build failed: Swift Test Linux Platform (Git Sha: f764b8b)

@atrick
Copy link
Contributor Author

atrick commented Sep 4, 2018

lldb-Suite :: api/check_public_api_headers/TestPublicAPIHeaders.py FAILED

atrick merged commit aef79d0 into swiftlang:master on Sep 4, 2018
atrick deleted the cpf-bug branch on October 16, 2018 at 16:23