[libc] fix atomic and apply an explicit check on unique object representations #119715


Merged

merged 4 commits into llvm:main from libc/atomic-fix2 on Dec 16, 2024

Conversation

Contributor

@SchrodingerZhu commented Dec 12, 2024

- align the alignment requirement with `atomic_ref` in libc++, which is more
  modern

- use `addressof` to avoid problems if future users overload the
  address-of operator

- check `has_unique_object_representations` and reject padded objects
  in libc's implementation
@llvmbot llvmbot added the libc label Dec 12, 2024
Member

@llvmbot commented Dec 12, 2024

@llvm/pr-subscribers-libc

Author: Schrodinger ZHU Yifan (SchrodingerZhu)

Changes

  • align the alignment requirement with `atomic_ref` in libc++, which is
    more modern

  • use `addressof` to avoid problems if future users overload the
    address-of operator

  • check `has_unique_object_representations` and reject padded objects
    in libc's implementation


Full diff: https://github.com/llvm/llvm-project/pull/119715.diff

3 Files Affected:

  • (modified) libc/src/__support/CPP/atomic.h (+47-30)
  • (modified) libc/src/__support/CPP/type_traits.h (+2)
  • (added) libc/src/__support/CPP/type_traits/has_unique_object_representations.h (+28)
diff --git a/libc/src/__support/CPP/atomic.h b/libc/src/__support/CPP/atomic.h
index a9fc7e61610ccc..dfe1dc9591e12b 100644
--- a/libc/src/__support/CPP/atomic.h
+++ b/libc/src/__support/CPP/atomic.h
@@ -9,6 +9,7 @@
 #ifndef LLVM_LIBC_SRC___SUPPORT_CPP_ATOMIC_H
 #define LLVM_LIBC_SRC___SUPPORT_CPP_ATOMIC_H
 
+#include "src/__support/CPP/type_traits/has_unique_object_representations.h"
 #include "src/__support/macros/attributes.h"
 #include "src/__support/macros/config.h"
 #include "src/__support/macros/properties/architectures.h"
@@ -47,12 +48,11 @@ template <typename T> struct Atomic {
                 "constructible, move constructible, copy assignable, "
                 "and move assignable.");
 
+  static_assert(cpp::has_unique_object_representations_v<T>,
+                "atomic<T> in libc only support types whose values has unique "
+                "object representations.");
+
 private:
-  // The value stored should be appropriately aligned so that
-  // hardware instructions used to perform atomic operations work
-  // correctly.
-  static constexpr int ALIGNMENT = sizeof(T) > alignof(T) ? sizeof(T)
-                                                          : alignof(T);
   // type conversion helper to avoid long c++ style casts
   LIBC_INLINE static int order(MemoryOrder mem_ord) {
     return static_cast<int>(mem_ord);
@@ -62,6 +62,18 @@ template <typename T> struct Atomic {
     return static_cast<int>(mem_scope);
   }
 
+  LIBC_INLINE static T *addressof(T &ref) { return __builtin_addressof(ref); }
+
+  // require types that are 1, 2, 4, 8, or 16 bytes in length to be aligned to
+  // at least their size to be potentially
+  // used lock-free
+  LIBC_INLINE_VAR static constexpr size_t MIN_ALIGNMENT =
+      (sizeof(T) & (sizeof(T) - 1)) || (sizeof(T) > 16) ? 0 : sizeof(T);
+
+  LIBC_INLINE_VAR static constexpr size_t ALIGNMENT = alignof(T) > MIN_ALIGNMENT
+                                                          ? alignof(T)
+                                                          : MIN_ALIGNMENT;
+
 public:
   using value_type = T;
 
@@ -87,9 +99,10 @@ template <typename T> struct Atomic {
        [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
     T res;
 #if __has_builtin(__scoped_atomic_load)
-    __scoped_atomic_load(&val, &res, order(mem_ord), scope(mem_scope));
+    __scoped_atomic_load(addressof(val), addressof(res), order(mem_ord),
+                         scope(mem_scope));
 #else
-    __atomic_load(&val, &res, order(mem_ord));
+    __atomic_load(addressof(val), addressof(res), order(mem_ord));
 #endif
     return res;
   }
@@ -104,9 +117,10 @@ template <typename T> struct Atomic {
   store(T rhs, MemoryOrder mem_ord = MemoryOrder::SEQ_CST,
         [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
 #if __has_builtin(__scoped_atomic_store)
-    __scoped_atomic_store(&val, &rhs, order(mem_ord), scope(mem_scope));
+    __scoped_atomic_store(addressof(val), addressof(rhs), order(mem_ord),
+                          scope(mem_scope));
 #else
-    __atomic_store(&val, &rhs, order(mem_ord));
+    __atomic_store(addressof(val), addressof(rhs), order(mem_ord));
 #endif
   }
 
@@ -114,8 +128,9 @@ template <typename T> struct Atomic {
   LIBC_INLINE bool compare_exchange_strong(
       T &expected, T desired, MemoryOrder mem_ord = MemoryOrder::SEQ_CST,
       [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
-    return __atomic_compare_exchange(&val, &expected, &desired, false,
-                                     order(mem_ord), order(mem_ord));
+    return __atomic_compare_exchange(addressof(val), addressof(expected),
+                                     addressof(desired), false, order(mem_ord),
+                                     order(mem_ord));
   }
 
   // Atomic compare exchange (separate success and failure memory orders)
@@ -123,17 +138,18 @@ template <typename T> struct Atomic {
       T &expected, T desired, MemoryOrder success_order,
       MemoryOrder failure_order,
       [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
-    return __atomic_compare_exchange(&val, &expected, &desired, false,
-                                     order(success_order),
-                                     order(failure_order));
+    return __atomic_compare_exchange(
+        addressof(val), addressof(expected), addressof(desired), false,
+        order(success_order), order(failure_order));
   }
 
   // Atomic compare exchange (weak version)
   LIBC_INLINE bool compare_exchange_weak(
       T &expected, T desired, MemoryOrder mem_ord = MemoryOrder::SEQ_CST,
       [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
-    return __atomic_compare_exchange(&val, &expected, &desired, true,
-                                     order(mem_ord), order(mem_ord));
+    return __atomic_compare_exchange(addressof(val), addressof(expected),
+                                     addressof(desired), true, order(mem_ord),
+                                     order(mem_ord));
   }
 
   // Atomic compare exchange (weak version with separate success and failure
@@ -142,9 +158,9 @@ template <typename T> struct Atomic {
       T &expected, T desired, MemoryOrder success_order,
       MemoryOrder failure_order,
       [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
-    return __atomic_compare_exchange(&val, &expected, &desired, true,
-                                     order(success_order),
-                                     order(failure_order));
+    return __atomic_compare_exchange(
+        addressof(val), addressof(expected), addressof(desired), true,
+        order(success_order), order(failure_order));
   }
 
   LIBC_INLINE T
@@ -152,10 +168,11 @@ template <typename T> struct Atomic {
            [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
     T ret;
 #if __has_builtin(__scoped_atomic_exchange)
-    __scoped_atomic_exchange(&val, &desired, &ret, order(mem_ord),
-                             scope(mem_scope));
+    __scoped_atomic_exchange(addressof(val), addressof(desired), addressof(ret),
+                             order(mem_ord), scope(mem_scope));
 #else
-    __atomic_exchange(&val, &desired, &ret, order(mem_ord));
+    __atomic_exchange(addressof(val), addressof(desired), addressof(ret),
+                      order(mem_ord));
 #endif
     return ret;
   }
@@ -165,10 +182,10 @@ template <typename T> struct Atomic {
             [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
     static_assert(cpp::is_integral_v<T>, "T must be an integral type.");
 #if __has_builtin(__scoped_atomic_fetch_add)
-    return __scoped_atomic_fetch_add(&val, increment, order(mem_ord),
+    return __scoped_atomic_fetch_add(addressof(val), increment, order(mem_ord),
                                      scope(mem_scope));
 #else
-    return __atomic_fetch_add(&val, increment, order(mem_ord));
+    return __atomic_fetch_add(addressof(val), increment, order(mem_ord));
 #endif
   }
 
@@ -177,10 +194,10 @@ template <typename T> struct Atomic {
            [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
     static_assert(cpp::is_integral_v<T>, "T must be an integral type.");
 #if __has_builtin(__scoped_atomic_fetch_or)
-    return __scoped_atomic_fetch_or(&val, mask, order(mem_ord),
+    return __scoped_atomic_fetch_or(addressof(val), mask, order(mem_ord),
                                     scope(mem_scope));
 #else
-    return __atomic_fetch_or(&val, mask, order(mem_ord));
+    return __atomic_fetch_or(addressof(val), mask, order(mem_ord));
 #endif
   }
 
@@ -189,10 +206,10 @@ template <typename T> struct Atomic {
             [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
     static_assert(cpp::is_integral_v<T>, "T must be an integral type.");
 #if __has_builtin(__scoped_atomic_fetch_and)
-    return __scoped_atomic_fetch_and(&val, mask, order(mem_ord),
+    return __scoped_atomic_fetch_and(addressof(val), mask, order(mem_ord),
                                      scope(mem_scope));
 #else
-    return __atomic_fetch_and(&val, mask, order(mem_ord));
+    return __atomic_fetch_and(addressof(val), mask, order(mem_ord));
 #endif
   }
 
@@ -201,10 +218,10 @@ template <typename T> struct Atomic {
             [[maybe_unused]] MemoryScope mem_scope = MemoryScope::DEVICE) {
     static_assert(cpp::is_integral_v<T>, "T must be an integral type.");
 #if __has_builtin(__scoped_atomic_fetch_sub)
-    return __scoped_atomic_fetch_sub(&val, decrement, order(mem_ord),
+    return __scoped_atomic_fetch_sub(addressof(val), decrement, order(mem_ord),
                                      scope(mem_scope));
 #else
-    return __atomic_fetch_sub(&val, decrement, order(mem_ord));
+    return __atomic_fetch_sub(addressof(val), decrement, order(mem_ord));
 #endif
   }
 
diff --git a/libc/src/__support/CPP/type_traits.h b/libc/src/__support/CPP/type_traits.h
index b9bc5b85684415..2b3914521cbe98 100644
--- a/libc/src/__support/CPP/type_traits.h
+++ b/libc/src/__support/CPP/type_traits.h
@@ -18,6 +18,7 @@
 #include "src/__support/CPP/type_traits/decay.h"
 #include "src/__support/CPP/type_traits/enable_if.h"
 #include "src/__support/CPP/type_traits/false_type.h"
+#include "src/__support/CPP/type_traits/has_unique_object_representations.h"
 #include "src/__support/CPP/type_traits/integral_constant.h"
 #include "src/__support/CPP/type_traits/invoke.h"
 #include "src/__support/CPP/type_traits/invoke_result.h"
@@ -65,4 +66,5 @@
 #include "src/__support/CPP/type_traits/type_identity.h"
 #include "src/__support/CPP/type_traits/void_t.h"
 
+
 #endif // LLVM_LIBC_SRC___SUPPORT_CPP_TYPE_TRAITS_H
diff --git a/libc/src/__support/CPP/type_traits/has_unique_object_representations.h b/libc/src/__support/CPP/type_traits/has_unique_object_representations.h
new file mode 100644
index 00000000000000..baed7da69d3199
--- /dev/null
+++ b/libc/src/__support/CPP/type_traits/has_unique_object_representations.h
@@ -0,0 +1,28 @@
+//===-- has_unique_object_representations type_traits ------------*- C++-*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+#ifndef LLVM_LIBC_SRC___SUPPORT_CPP_TYPE_TRAITS_HAS_UNIQUE_OBJECT_REPRESENTATIONS_H
+#define LLVM_LIBC_SRC___SUPPORT_CPP_TYPE_TRAITS_HAS_UNIQUE_OBJECT_REPRESENTATIONS_H
+
+#include "src/__support/CPP/type_traits/integral_constant.h"
+#include "src/__support/macros/config.h"
+
+namespace LIBC_NAMESPACE_DECL {
+namespace cpp {
+
+template <class T>
+struct has_unique_object_representations
+    : public integral_constant<bool, __has_unique_object_representations(T)> {};
+
+template <class T>
+LIBC_INLINE_VAR constexpr bool has_unique_object_representations_v =
+    has_unique_object_representations<T>::value;
+
+} // namespace cpp
+} // namespace LIBC_NAMESPACE_DECL
+
+#endif // LLVM_LIBC_SRC___SUPPORT_CPP_TYPE_TRAITS_HAS_UNIQUE_OBJECT_REPRESENTATIONS_H


github-actions bot commented Dec 12, 2024

✅ With the latest revision this PR passed the C/C++ code formatter.

@@ -62,6 +62,18 @@ template <typename T> struct Atomic {
return static_cast<int>(mem_scope);
}

LIBC_INLINE static T *addressof(T &ref) { return __builtin_addressof(ref); }
Member


Do we need static, here and below?

Contributor Author

@SchrodingerZhu Dec 12, 2024


A `constexpr` member variable can only be `static`; otherwise the compiler complains.
The function could be either, but since it does not depend on any member field, `static` seems more suitable.

// at least their size to be potentially
// used lock-free
LIBC_INLINE_VAR static constexpr size_t MIN_ALIGNMENT =
(sizeof(T) & (sizeof(T) - 1)) || (sizeof(T) > 16) ? 0 : sizeof(T);
Member


Is this expression re-used from libc++?

Contributor Author

@SchrodingerZhu Dec 12, 2024


It is copied from libc++.

static constexpr size_t __min_alignment = (sizeof(_Tp) & (sizeof(_Tp) - 1)) || (sizeof(_Tp) > 16) ? 0 : sizeof(_Tp);

Also, note that libc++ is having some trouble with its atomics because __builtin_clear_padding is not implemented; they will need to refactor their std::atomic implementation into a form similar to atomic_ref, so code changes on their side are expected. That's why I prefer following atomic_ref.

@SchrodingerZhu changed the title [libc] fix and explicit atomic difference from c++ [libc] fix atomic and apply an explicit check on unique object representations Dec 12, 2024
@SchrodingerZhu merged commit f75c846 into llvm:main Dec 16, 2024
11 checks passed
@SchrodingerZhu deleted the libc/atomic-fix2 branch December 16, 2024 17:04