
[compiler-rt][builtins] Add opt-in pthread_mutex_t locks to libatomic #95326


Merged
merged 1 commit into llvm:main on Jun 13, 2024

Conversation

Logikable (Contributor)

When an uninstrumented libatomic is used with a TSan-instrumented memcpy, TSan may report a data race in circumstances where the writes are arguably safe.

This occurs because __atomic_compare_exchange won't be instrumented in an uninstrumented libatomic, so TSan doesn't know that the subsequent memcpy is race-free.
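As an illustration (the type name and size below are hypothetical, not taken from the patch): an atomic operation on a type too wide for native atomics is lowered to a libatomic call, and inside libatomic the payload is copied with memcpy under a lock that TSan cannot see.

/* Hypothetical example: a 32-byte struct is too wide for native
   atomics, so Clang/GCC lower the atomic store to a libatomic call. */
#include <stdatomic.h>

typedef struct { char buf[32]; } Blob;

static _Atomic Blob shared;

void writer(const Blob *src) {
  /* Lowers to __atomic_store(sizeof(Blob), &shared, src, SEQ_CST).
     Inside an uninstrumented libatomic this becomes roughly
       lock(l); memcpy(&shared, src, sizeof(Blob)); unlock(l);
     TSan intercepts the memcpy, but the CAS-based lock() is invisible
     to it, so two concurrent writers look like a data race. */
  atomic_store(&shared, *src);
}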

On the other hand, pthread_mutex_(un)lock is intercepted by TSan, so with pthread-based locks even an uninstrumented libatomic will not produce this false positive.

pthread mutexes may also try a number of different strategies to acquire the lock, which can bound the time a thread waits for a contended lock.

While pthread_mutex_lock has higher overhead (a function call plus some dispatching), any dispatch to libatomic already implies that strict performance guarantees have been given up.
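Concretely, the change lets libatomic's internal lock table be backed by pthread mutexes instead of the default CAS spinlocks. The sketch below is modeled on compiler-rt's atomic.c as built with Clang; the guard macro and details are illustrative rather than a verbatim copy of the patch. On glibc, for example, a mutex can spin briefly and then sleep on a futex, which is what bounds the waiting under contention.

#include <stdint.h>

#define SPINLOCK_COUNT (1 << 10)

#if defined(_LIBATOMIC_USE_PTHREAD) /* illustrative guard macro */
#include <pthread.h>
/* Opt-in path: pthread mutexes. TSan intercepts
   pthread_mutex_lock/unlock, so the mutual exclusion is visible. */
typedef pthread_mutex_t Lock;
static Lock locks[SPINLOCK_COUNT] = {
    [0 ... SPINLOCK_COUNT - 1] = PTHREAD_MUTEX_INITIALIZER}; /* GNU range init */

static inline void lock(Lock *l) { pthread_mutex_lock(l); }
static inline void unlock(Lock *l) { pthread_mutex_unlock(l); }
#else
/* Default path: a CAS spinlock. Invisible to TSan when libatomic is
   built without instrumentation. */
typedef _Atomic(uintptr_t) Lock;
static Lock locks[SPINLOCK_COUNT];

static inline void lock(Lock *l) {
  uintptr_t expected = 0;
  while (!__c11_atomic_compare_exchange_weak(l, &expected, 1, __ATOMIC_ACQUIRE,
                                             __ATOMIC_RELAXED))
    expected = 0;
}
static inline void unlock(Lock *l) {
  __c11_atomic_store(l, 0, __ATOMIC_RELEASE);
}
#endif

The per-address hashing into locks[] is unchanged; only the lock representation and the lock/unlock primitives differ between the two paths.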

@@ -237,6 +237,14 @@ if(NOT FUCHSIA AND NOT COMPILER_RT_BAREMETAL_BUILD)
)
endif()

option(COMPILER_RT_LIBATOMIC_USE_PTHREAD
Member
Perhaps this can be the default for Linux if not Android.

Logikable (Contributor, Author)
We can try to switch that after this PR lands.
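For context, the new option presumably reads roughly as follows in compiler-rt's CMake, defaulting to off so the pthread path stays opt-in; the help text and the plumbing macro below are reconstructions, not verbatim from the patch.

option(COMPILER_RT_LIBATOMIC_USE_PTHREAD
  "Use pthread_mutex_t-based locks in libatomic (visible to TSan's interceptors)."
  OFF)

if(COMPILER_RT_LIBATOMIC_USE_PTHREAD)
  # Hypothetical plumbing: forward the choice to atomic.c as a macro.
  add_compile_definitions(_LIBATOMIC_USE_PTHREAD)
endif()

A build would then opt in at configure time with -DCOMPILER_RT_LIBATOMIC_USE_PTHREAD=ON.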

Logikable merged commit 6499c5d into llvm:main on Jun 13, 2024
9 checks passed
EthanLuisMcDonough pushed a commit to EthanLuisMcDonough/llvm-project that referenced this pull request Aug 13, 2024
…llvm#95326)
