Sema: Optimize Constraint layout to save 24 bytes #79882


Merged · 5 commits · Mar 11, 2025

Conversation

@slavapestov (Contributor) commented on Mar 10, 2025

While building the standard library, the total size of all constraint solver arenas allocated goes down from 638 MB to 596 MB.

Fixes rdar://146459456

…mplementation

Normal conformances, self conformances, and availability contexts cannot
contain types with type variables, so there is no reason to duplicate
the uniquing maps between the permanent arena and solver arena.
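The idea in this commit can be sketched as follows. The types and map below are hypothetical stand-ins (the real compiler uniques AST nodes in per-arena folding sets, not string-keyed maps); the sketch only illustrates collapsing a duplicated per-arena uniquing map into a single permanent one when the keys can never involve type variables:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of arena-based uniquing. If a kind of object can
// never contain type variables, it never needs to live in the short-lived
// solver arena, so a single map in the permanent arena suffices.
struct ASTContext {
  // Before: one map per arena (permanent + solver), both consulted.
  // After: conformance-like objects are uniqued only in the permanent map.
  std::map<std::string, int> permanentConformances;

  int &getConformance(const std::string &key) {
    // No solver-arena lookup needed: keys cannot involve type variables.
    return permanentConformances[key];
  }
};
```
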
@xedin (Contributor) left a comment


Thank you!

The largest union member was for SyntacticElement constraints;
let's tail-allocate the ContextualTypeInfo instead of storing
one in every constraint.
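A minimal sketch of the tail-allocation idea described in this commit, under simplifying assumptions: the field names and types here are hypothetical stand-ins, not the real Swift compiler layout. Constraints that carry a `ContextualTypeInfo` allocate it immediately after the object; all others pay nothing:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Stand-in for the real ContextualTypeInfo; fields are illustrative only.
struct ContextualTypeInfo {
  int purpose = 0;
  void *type = nullptr;
};

struct Constraint {
  unsigned kind;
  bool hasContextualInfo;

  // Tail-allocated storage begins right after the object.
  ContextualTypeInfo *getContextualInfo() {
    assert(hasContextualInfo && "no tail-allocated ContextualTypeInfo");
    return reinterpret_cast<ContextualTypeInfo *>(this + 1);
  }

  static Constraint *create(unsigned kind, const ContextualTypeInfo *info) {
    // Only pay for the trailing object when it is actually present.
    std::size_t size =
        sizeof(Constraint) + (info ? sizeof(ContextualTypeInfo) : 0);
    void *mem = ::operator new(size);
    auto *c = new (mem) Constraint{kind, info != nullptr};
    if (info)
      new (c + 1) ContextualTypeInfo(*info);
    return c;
  }
};
```

The actual change likely uses `llvm::TrailingObjects`, which packages this pattern (size computation, alignment, accessors) safely; the sketch above just shows why every non-SyntacticElement constraint shrinks.
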
@slavapestov changed the title from "Sema: Optimize Constraint layout to save 16 bytes" to "Sema: Optimize Constraint layout to save 24 bytes" on Mar 10, 2025
If the precondition doesn't hold, we will return a pointer to
some random memory, so it's best to always crash since this
indicates something is seriously wrong.
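The point of this commit can be sketched like so, again with hypothetical simplified types: the accessor checks its precondition unconditionally (not just via `assert`, which release builds compile out), so a violation crashes immediately instead of handing back a pointer into unrelated memory:

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// Stand-in types; not the real compiler's layout.
struct Info {
  int value;
};

struct Node {
  bool hasInfo = false;
  unsigned kind = 0;

  // Always verify the precondition, even in release builds: if no Info was
  // tail-allocated, (this + 1) points at random memory, so crashing here is
  // far better than returning a bogus pointer.
  Info *getInfo() {
    if (!hasInfo) {
      std::fprintf(stderr, "fatal: no tail-allocated Info\n");
      std::abort();
    }
    return reinterpret_cast<Info *>(this + 1);
  }

  static Node *createWithInfo(int value) {
    void *mem = ::operator new(sizeof(Node) + sizeof(Info));
    auto *n = new (mem) Node;
    n->hasInfo = true;
    new (n + 1) Info{value};
    return n;
  }
};
```
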
@slavapestov (Contributor, Author)

@swift-ci Please smoke test

@slavapestov (Contributor, Author)

@swift-ci Please test source compatibility

@slavapestov enabled auto-merge on March 11, 2025, 18:12
@slavapestov merged commit 4c60e03 into swiftlang:main on Mar 11, 2025
4 of 6 checks passed
3 participants