
[Concurrency] Eliminate StatusRecordLockRecord. #80316


Merged
merged 5 commits into swiftlang:main on Mar 28, 2025

Conversation

@mikeash (Contributor) commented Mar 26, 2025

Draft PR, this still needs a Win32 implementation of RecursiveMutex.

Move to a recursive lock inline in the Task. This avoids the need to allocate a lock record and simplifies the code somewhat.

Change Task's OpaquePrivateStorage to compute its size at build time based on the sizes of its components, rather than having it be a fixed size. It appears that the fixed size was intended to be part of the ABI, but that didn't happen and we're free to change this size. We need to expand it slightly when using pthread_mutex as the recursive lock, as pthread_mutex is pretty big. Other recursive locks allow it to shrink slightly.
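
A rough sketch of the sizing idea (the struct and field names below are placeholders for illustration, not the actual runtime definitions):

```cpp
// Hypothetical illustration: derive the opaque storage size from the
// components it has to hold instead of hard-coding a pointer count.
#include <pthread.h>

struct ActiveTaskStatusPlaceholder {
  void *Records; // stand-in for the real active task status word(s)
  void *Flags;
};

struct TaskPrivateStorageComponents {
  ActiveTaskStatusPlaceholder Status;
  pthread_mutex_t StatusLock; // the inline recursive lock; size varies by platform
  // ... other per-task runtime state ...
};

struct alignas(2 * alignof(void *)) OpaquePrivateStorage {
  // Replaces a fixed `void *Storage[14]`: grows when pthread_mutex is the
  // recursive lock, shrinks slightly with other lock implementations.
  char Storage[sizeof(TaskPrivateStorageComponents)];
};
```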

We don't have a recursive mutex in our Threading support code, so add a RecursiveMutex type.

rdar://113898653
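
As for the new RecursiveMutex type, here is a minimal sketch of the general shape such a wrapper could take on a POSIX platform (illustrative only; the type added by this PR sits on top of per-platform handle functions):

```cpp
// Sketch of a recursive mutex wrapper in the style of the existing Threading
// Mutex type. Not the merged code; a pthreads-only illustration.
#include <pthread.h>

class RecursiveMutex {
  pthread_mutex_t Handle;

public:
  RecursiveMutex() {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&Handle, &attr);
    pthread_mutexattr_destroy(&attr);
  }
  ~RecursiveMutex() { pthread_mutex_destroy(&Handle); }

  // Non-copyable and non-movable, like the existing Mutex.
  RecursiveMutex(const RecursiveMutex &) = delete;
  RecursiveMutex &operator=(const RecursiveMutex &) = delete;
  RecursiveMutex(RecursiveMutex &&) = delete;
  RecursiveMutex &operator=(RecursiveMutex &&) = delete;

  // The owning thread may call lock() again while already holding the mutex.
  void lock() { pthread_mutex_lock(&Handle); }
  void unlock() { pthread_mutex_unlock(&Handle); }
};
```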

@ktoso (Contributor) left a comment


Looks great! Great to use a plain old lock rather than our own record lock dance there, much nicer.

@mikeash (Author) commented Mar 26, 2025

@swift-ci Please Build Toolchain Linux Platform

@mikeash (Author) commented Mar 26, 2025

@swift-ci Please Build Toolchain macOS Platform

We're using `CRITICAL_SECTION` here, as that supports recursion.

rdar://113898653
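
The commit above adds the Win32 side; a minimal sketch of a CRITICAL_SECTION-based implementation, assuming handle-function names that mirror the pthreads API (not necessarily the merged code):

```cpp
// Illustrative Win32 sketch: CRITICAL_SECTION is recursive by design, so the
// owning thread can re-enter without deadlocking.
#include <windows.h>

using recursive_mutex_handle = CRITICAL_SECTION;

inline void recursive_mutex_init(recursive_mutex_handle &handle,
                                 bool checked = false) {
  (void)checked; // no checked mode in this sketch
  InitializeCriticalSection(&handle);
}

inline void recursive_mutex_destroy(recursive_mutex_handle &handle) {
  DeleteCriticalSection(&handle);
}

inline void recursive_mutex_lock(recursive_mutex_handle &handle) {
  EnterCriticalSection(&handle); // recursion-safe for the owning thread
}

inline void recursive_mutex_unlock(recursive_mutex_handle &handle) {
  LeaveCriticalSection(&handle);
}
```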
@al45tair (Contributor) commented:

@swift-ci Please test Windows platform

Comment on lines 247 to 250
RecursiveMutex(const Mutex &) = delete;
RecursiveMutex &operator=(const Mutex &) = delete;
RecursiveMutex(Mutex &&) = delete;
RecursiveMutex &operator=(Mutex &&) = delete;
Review comment:

I think these want to take RecursiveMutex rather than Mutex arguments.

@mikeash (Author) replied:

Copy/pasta trouble, fixed all of these now.
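
For reference, the corrected declarations presumably delete copies and moves of RecursiveMutex itself (a sketch of the fix, not a quote of the merged diff):

```cpp
RecursiveMutex(const RecursiveMutex &) = delete;
RecursiveMutex &operator=(const RecursiveMutex &) = delete;
RecursiveMutex(RecursiveMutex &&) = delete;
RecursiveMutex &operator=(RecursiveMutex &&) = delete;
```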


using recursive_mutex_handle = ::pthread_mutex_t;

inline void recursive_mutex_init(mutex_handle &handle, bool checked = false) {
Review comment:

It so happens that mutex_handle and recursive_mutex_handle are the same here, but really these should take recursive_mutex_handle.


using recursive_mutex_handle = ::pthread_mutex_t;

inline void recursive_mutex_init(mutex_handle &handle, bool checked = false) {
Review comment:

Similarly to the Linux implementation, these should probably take recursive_mutex_handle arguments.
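
A minimal sketch of the pthreads side with the suggested recursive_mutex_handle parameter type, assuming a PTHREAD_MUTEX_RECURSIVE attribute for initialization (illustrative, not the merged code):

```cpp
#include <pthread.h>

using recursive_mutex_handle = ::pthread_mutex_t;

inline void recursive_mutex_init(recursive_mutex_handle &handle,
                                 bool checked = false) {
  (void)checked; // checked-mode handling omitted in this sketch
  pthread_mutexattr_t attr;
  pthread_mutexattr_init(&attr);
  pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
  pthread_mutex_init(&handle, &attr);
  pthread_mutexattr_destroy(&attr);
}

inline void recursive_mutex_lock(recursive_mutex_handle &handle) {
  pthread_mutex_lock(&handle);
}

inline void recursive_mutex_unlock(recursive_mutex_handle &handle) {
  pthread_mutex_unlock(&handle);
}

inline void recursive_mutex_destroy(recursive_mutex_handle &handle) {
  pthread_mutex_destroy(&handle);
}
```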

mikeash and others added 2 commits March 27, 2025 14:31
`ULONG_PTR` is defined slightly differently on `_WIN64` builds.

rdar://113898653
@al45tair (Contributor) commented:

@swift-ci Please test Windows platform

@mikeash marked this pull request as ready for review March 27, 2025 19:15
@mikeash requested a review from a team as a code owner March 27, 2025 19:15
@mikeash (Author) commented Mar 27, 2025

@swift-ci please test

@mikeash (Author) commented Mar 28, 2025

embedded/concurrency-leaks.swift fails because it checks for the lock record being allocated and destroyed. Removing those lines since that no longer happens.

@mikeash (Author) commented Mar 28, 2025

@swift-ci please test

@mikeash enabled auto-merge March 28, 2025 14:00
@mikeash (Author) commented Mar 28, 2025

Windows failed due to an unrelated issue on main which is now fixed, retrying.

@mikeash (Author) commented Mar 28, 2025

@swift-ci please test windows platform

@mikeash merged commit ecdcb82 into swiftlang:main Mar 28, 2025
4 of 5 checks passed
@@ -298,7 +298,18 @@ class AsyncTask : public Job {

   /// Private storage for the use of the runtime.
   struct alignas(2 * alignof(void*)) OpaquePrivateStorage {
-    void *Storage[14];
+#if SWIFT_CONCURRENCY_ENABLE_PRIORITY_ESCALATION && SWIFT_POINTER_IS_4_BYTES

Review comment:

Sorry for the late comment; I'm just slightly confused about why there's a #if condition where both paths are textually identical.

@mikeash (Author) replied:

Well spotted. The reason is, I messed it up. I have a fix going here: #80447
