
[SYCL] Do module cleanup in sycl-post-link even if there were no splits #10863

Merged 2 commits into intel:sycl on Aug 24, 2023

Conversation

@sarnex (Contributor) commented Aug 17, 2023

This enables unused symbol removal through GlobalDCE that would otherwise only happen when the module is split. Right now there is a mismatch: split modules have unused symbols removed, but copied modules (no splits) do not.
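For illustration, a minimal sketch of the kind of dead-symbol elimination GlobalDCE performs (the symbol names are hypothetical, not taken from this PR):

```llvm
; Before cleanup: both symbols have internal linkage and no uses.
@unusedHelper = internal global i32 0

define internal void @deadFn() {
  ret void
}

; The externally visible kernel entry point is always kept.
define spir_kernel void @aliveKernel() {
  ret void
}

; After GlobalDCE, @unusedHelper and @deadFn are removed. With device-code
; splitting this cleanup already ran on each split module; this change makes
; it run in the no-split (copied module) case as well.
```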

Signed-off-by: Sarnie, Nick <[email protected]>
@sarnex sarnex marked this pull request as ready for review August 17, 2023 20:46
@sarnex sarnex requested a review from a team as a code owner August 17, 2023 20:46
@sarnex sarnex requested a review from AlexeySachkov August 17, 2023 20:47
@MrSidims (Contributor) left a comment

I'm not really an expert in the modified code. Can we have ; CHECK: _ZTSZ4mainEUlT_E0_ to give a hint that the change won't affect kernels?
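A hedged sketch of what the requested check might look like in the affected lit test (the RUN line and flags are illustrative; only the mangled kernel name comes from the comment above):

```llvm
; RUN: sycl-post-link -split=auto -S %s -o %t.table
; Verify the kernel is still present after the post-link cleanup:
; CHECK: _ZTSZ4mainEUlT_E0_
```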

@maksimsab (Contributor) left a comment

@sarnex Could you please clarify why internal linkage is replaced with weak_odr?

@sarnex (Contributor, Author) commented Aug 21, 2023

@maksimsab Sure, it's because those globals are now being optimized out by GlobalDCE. The point of those tests is to test something else, so I changed the linkage to preserve the purpose of each test. Note that if there were any splits (for example, if the code used fp64), those globals would already have been optimized out in the split modules; it's only because there happen to be no splits that they weren't removed before.
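The linkage distinction at play can be sketched as follows (global names are hypothetical):

```llvm
; internal: not visible outside the module, so GlobalDCE deletes it
; once nothing in the module references it.
@wasInternal = internal global i32 0

; weak_odr: externally visible, so GlobalDCE must conservatively keep
; it even when it is unreferenced within this module.
@keptWeakOdr = weak_odr global i32 0
```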

@MrSidims Sure, I will add this.

@sarnex sarnex requested a review from maksimsab August 22, 2023 14:28
@sarnex (Contributor, Author) commented Aug 23, 2023

@AlexeySachkov Hey, do you mind taking a look at this one to make sure it's not insane? Thanks

@asudarsa (Contributor) left a comment

Thanks, Nick. LGTM. Seems like something that was missed earlier.
One question: if we are now doing cleanup in ALL cases, can we just call the cleanup routine at one site (in sycl-post-link, right after calling getDeviceCodeSplitter)?
I will approve nevertheless; this is just something you can address here if possible.

Thanks

@sarnex (Contributor, Author) commented Aug 24, 2023

@asudarsa We should be able to do that, but I will do it in a follow-up change if that's okay.

@sarnex (Contributor, Author) commented Aug 24, 2023

@intel/llvm-gatekeepers Can we merge, please? The original CUDA failure is unrelated, and I tried to rerun it, but it seems no runners picked it up:

This request was automatically failed because there were no enabled runners online to process the request for more than 1 days.

@sarnex (Contributor, Author) commented Aug 24, 2023

@intel/llvm-gatekeepers CUDA passed on rerun, so all CI is passing now; can we merge? Thanks for rerunning the CI!

@steffenlarsen steffenlarsen merged commit 6d952a6 into intel:sycl Aug 24, 2023
@sarnex (Contributor, Author) commented Aug 24, 2023

@asudarsa It looks like even if we move the cleanup call outside of ModuleSplitter, we would still need to call it at multiple sites because of the special ESIMD handling in handleESIMD. Since I'm not sure we'd get any code cleanup benefit, I'm planning to leave the code as is, if that's okay.

@asudarsa (Contributor) commented

Ah, OK. Thanks for considering it, Nick. Again, this was not a PR blocker. Thanks again for fixing this.

6 participants