[libc] Implement simple lock-free stack data structure #83026

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open. Wants to merge 2 commits into main.
Conversation

@jhuber6 jhuber6 commented Feb 26, 2024

Summary:
This patch implements a FixedStack which represents a lock-free stack
implemented using a fixed-size memory buffer. The utility of this class
is to provide a data structure that the GPU implementation can use in
cases where a mutex would normally be used. We cannot implement a
general-purpose mutex on the GPU due to the lack of a guaranteed fair
thread scheduler.

This lock-free stack is implemented as a pair of 'pointers' to a used
and a free stack. The pointers here are simple 32-bit indexes into an
underlying fixed-size memory buffer. The free stack is initialized to
point to the entire buffer, i.e. 1, 2, 3, ..., N, with N used as a
sentinel value. To perform a push operation, we pop a node off of the
free stack and then push it onto the used stack, and vice versa for a
pop operation.

The underlying pop implementation relies on continually trying to update
the head pointer to the next node using atomic CAS. The CAS loop repeats
until it reads the next pointer successfully and the head pointer has
not changed.

The underlying push implementation relies on continually trying to
update the head pointer to the new node. The CAS loop repeats until we
write to the head pointer and it has not changed.

Both of these implementations rely on detecting whether or not the head
pointer has changed. Using bare indexes we are susceptible to the ABA
problem, as the stack could have been pushed and popped until we are
back at the same index without noticing. For this reason, the 'head'
pointer is augmented with a 32-bit ABA tag that increments each time it
is updated. This allows the head update to be done in a single 64-bit
atomic operation, which is supported by hardware.
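As an illustration of the head encoding and CAS loop described above, here is a standalone single-threaded sketch. It uses std::atomic rather than the patch's cpp::atomic helpers, and the CAPACITY and free-list contents are illustrative, not taken from the patch:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Standalone sketch of the 32-bit-index + 32-bit-ABA-tag head word.
constexpr uint32_t CAPACITY = 4; // index == CAPACITY is the sentinel

constexpr uint32_t get_node(uint64_t head) {
  return static_cast<uint32_t>(head & 0xffffffff);
}

// Bump the tag in the high 32 bits so two heads with the same index
// still compare unequal after intervening pushes and pops.
constexpr uint64_t make_new_head(uint64_t orig, uint32_t node) {
  return static_cast<uint64_t>(node) | (((orig >> 32) + 1) << 32);
}

// The free list: node i points at node i + 1, ending at the sentinel.
uint32_t next[CAPACITY] = {1, 2, 3, 4};

// Pop one node off the stack headed by `head`, in the style of the
// patch's pop_impl: retry the CAS until the head is unchanged between
// the read of `next` and the update.
uint32_t pop_impl(std::atomic<uint64_t> &head) {
  uint64_t orig = head.load(std::memory_order_relaxed);
  for (;;) {
    if (get_node(orig) == CAPACITY)
      return CAPACITY; // empty
    uint32_t nxt = next[get_node(orig)];
    if (head.compare_exchange_weak(orig, make_new_head(orig, nxt),
                                   std::memory_order_acquire,
                                   std::memory_order_relaxed))
      return get_node(orig); // `orig` still holds the pre-CAS head
  }
}
```

Popping repeatedly yields the node indices 0, 1, 2, 3 and then the sentinel once the list is exhausted.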

llvmbot commented Feb 26, 2024

@llvm/pr-subscribers-libc

Author: Joseph Huber (jhuber6)

Changes


Full diff: https://github.com/llvm/llvm-project/pull/83026.diff

7 Files Affected:

  • (modified) libc/src/__support/CMakeLists.txt (+9)
  • (added) libc/src/__support/fixedstack.h (+142)
  • (modified) libc/test/integration/src/__support/CMakeLists.txt (+4)
  • (added) libc/test/integration/src/__support/gpu/CMakeLists.txt (+19)
  • (added) libc/test/integration/src/__support/gpu/fixed_stack_test.cpp (+75)
  • (modified) libc/test/src/__support/CMakeLists.txt (+10)
  • (added) libc/test/src/__support/fixedstack_test.cpp (+26)
diff --git a/libc/src/__support/CMakeLists.txt b/libc/src/__support/CMakeLists.txt
index 1a4b3e9a2145c0..5288c808ca6bbb 100644
--- a/libc/src/__support/CMakeLists.txt
+++ b/libc/src/__support/CMakeLists.txt
@@ -177,6 +177,15 @@ add_header_library(
     libc.src.__support.CPP.array
 )
 
+add_header_library(
+  fixedstack
+  HDRS
+    fixedstack.h
+  DEPENDS
+    libc.src.__support.CPP.array
+    libc.src.__support.CPP.atomic
+)
+
 add_header_library(
   char_vector
   HDRS
diff --git a/libc/src/__support/fixedstack.h b/libc/src/__support/fixedstack.h
new file mode 100644
index 00000000000000..f0e333dc345689
--- /dev/null
+++ b/libc/src/__support/fixedstack.h
@@ -0,0 +1,142 @@
+//===-- A lock-free data structure for a fixed capacity stack ---*- C++ -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_LIBC_SRC___SUPPORT_FIXEDSTACK_H
+#define LLVM_LIBC_SRC___SUPPORT_FIXEDSTACK_H
+
+#include "src/__support/CPP/array.h"
+#include "src/__support/CPP/atomic.h"
+
+#include <stdint.h>
+
+namespace LIBC_NAMESPACE {
+
+// A lock-free fixed size stack backed by an underlying cpp::array data
+// structure. It supports push and pop operations in a thread safe manner.
+template <typename T, uint32_t CAPACITY> class alignas(16) FixedStack {
+  static_assert(cpp::is_trivially_constructible<T>::value, "Type failure");
+  static_assert(CAPACITY < UINT32_MAX, "Invalid buffer size");
+
+  // The head of the free and used stacks. Represents as a 32-bit index combined
+  // with a 32-bit ABA tag that is updated in a single atomic operation.
+  uint64_t free;
+  uint64_t used;
+
+  // The stack is a linked list of indices into the underlying data
+  cpp::array<uint32_t, CAPACITY> next;
+  cpp::array<T, CAPACITY> data;
+
+  // Get the 32-bit index into the underlying array from the head.
+  static constexpr uint32_t get_node(uint64_t head) {
+    return static_cast<uint32_t>(head & 0xffffffff);
+  }
+
+  // Increment the old ABA tag and merge it into the new index.
+  static constexpr uint64_t make_new_head(uint64_t orig, uint32_t node) {
+    return static_cast<uint64_t>(node) | (((orig >> 32) + 1) << 32);
+  }
+
+  void sleep_briefly() {
+#if defined(LIBC_TARGET_ARCH_IS_NVPTX)
+    if (__nvvm_reflect("__CUDA_ARCH") >= 700)
+      LIBC_INLINE_ASM("nanosleep.u32 32;" ::: "memory");
+#elif defined(LIBC_TARGET_ARCH_IS_AMDGPU)
+    __builtin_amdgcn_s_sleep(1);
+#elif defined(LIBC_TARGET_ARCH_IS_X86)
+    __builtin_ia32_pause();
+#else
+    // Simply do nothing if sleeping isn't supported on this platform.
+#endif
+  }
+
+  // Helper macros for the atomic operations. We cannot use the standard
+  // cpp::atomic helpers because the initializer will no longer be constexpr and
+  // the NVPTX backend cannot currently support all of the atomics.
+#define atomic_load(val, mem_order) __atomic_load_n(val, (int)mem_order)
+#define atomic_cas(val, expected, desired, success_order, failure_order)       \
+  __atomic_compare_exchange_n(val, expected, desired, /*weak=*/true,           \
+                              (int)success_order, (int)failure_order)
+
+  // Attempts to pop data from the given stack by making it point to the next
+  // node. We repeatedly attempt to write to the head using compare-and-swap,
+  // expecting that it has not been changed by any other thread.
+  uint32_t pop_impl(uint64_t *head) {
+    uint64_t orig = atomic_load(head, cpp::MemoryOrder::RELAXED);
+
+    for (;;) {
+      if (get_node(orig) == CAPACITY)
+        return CAPACITY;
+
+      uint64_t node =
+          atomic_load(&next[get_node(orig)], cpp::MemoryOrder::RELAXED);
+      if (atomic_cas(head, &orig, make_new_head(orig, node),
+                     cpp::MemoryOrder::ACQUIRE, cpp::MemoryOrder::RELAXED))
+        break;
+      sleep_briefly();
+    }
+    return get_node(orig);
+  }
+
+  // Attempts to push data to the given stack by making it point to the new
+  // node. We repeatedly attempt to write to the head using compare-and-swap,
+  // expecting that it has not been changed by any other thread.
+  uint32_t push_impl(uint64_t *head, uint32_t node) {
+    uint64_t orig = atomic_load(head, cpp::MemoryOrder::RELAXED);
+    for (;;) {
+      next[node] = get_node(orig);
+      if (atomic_cas(head, &orig, make_new_head(orig, node),
+                     cpp::MemoryOrder::RELEASE, cpp::MemoryOrder::RELAXED))
+        break;
+      sleep_briefly();
+    }
+    return get_node(*head);
+  }
+
+public:
+  // Initialize the free stack to be full and the used stack to be empty. We use
+  // the capacity of the stack as a sentinel value.
+  constexpr FixedStack() : free(0), used(CAPACITY), data{} {
+    for (uint32_t i = 0; i < CAPACITY; ++i)
+      next[i] = i + 1;
+  }
+
+  bool push(const T &val) {
+    uint32_t node = pop_impl(&free);
+    if (node == CAPACITY)
+      return false;
+
+    data[node] = val;
+    push_impl(&used, node);
+    return true;
+  }
+
+  bool pop(T &val) {
+    uint32_t node = pop_impl(&used);
+    if (node == CAPACITY)
+      return false;
+
+    val = data[node];
+    push_impl(&free, node);
+    return true;
+  }
+
+  bool empty() const {
+    return get_node(atomic_load(&used, cpp::MemoryOrder::RELAXED)) == CAPACITY;
+  }
+
+  bool full() const {
+    return get_node(atomic_load(&free, cpp::MemoryOrder::RELAXED)) == CAPACITY;
+  }
+
+#undef atomic_load
+#undef atomic_cas
+};
+
+} // namespace LIBC_NAMESPACE
+
+#endif // LLVM_LIBC_SRC___SUPPORT_FIXEDSTACK_H
diff --git a/libc/test/integration/src/__support/CMakeLists.txt b/libc/test/integration/src/__support/CMakeLists.txt
index 7c853ff10259f5..4eaa8f50269815 100644
--- a/libc/test/integration/src/__support/CMakeLists.txt
+++ b/libc/test/integration/src/__support/CMakeLists.txt
@@ -1 +1,5 @@
+if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/${LIBC_TARGET_OS})
+  add_subdirectory(${LIBC_TARGET_OS})
+endif()
+
 add_subdirectory(threads)
diff --git a/libc/test/integration/src/__support/gpu/CMakeLists.txt b/libc/test/integration/src/__support/gpu/CMakeLists.txt
new file mode 100644
index 00000000000000..d83a762cacbd8d
--- /dev/null
+++ b/libc/test/integration/src/__support/gpu/CMakeLists.txt
@@ -0,0 +1,19 @@
+add_custom_target(support-gpu-integration-tests)
+add_dependencies(libc-integration-tests support-gpu-integration-tests)
+
+add_integration_test(
+  support_fixed_stack_test
+  SUITE support-gpu-integration-tests
+  SRCS
+    fixed_stack_test.cpp
+  DEPENDS
+    libc.src.__support.GPU.utils
+    libc.src.__support.fixedstack
+  LOADER_ARGS
+    --blocks-x 2
+    --blocks-y 2
+    --blocks-z 2
+    --threads-x 4
+    --threads-y 4
+    --threads-z 4
+)
diff --git a/libc/test/integration/src/__support/gpu/fixed_stack_test.cpp b/libc/test/integration/src/__support/gpu/fixed_stack_test.cpp
new file mode 100644
index 00000000000000..52fad148023602
--- /dev/null
+++ b/libc/test/integration/src/__support/gpu/fixed_stack_test.cpp
@@ -0,0 +1,75 @@
+//===-- Integration test for the lock-free stack --------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "src/__support/GPU/utils.h"
+#include "src/__support/fixedstack.h"
+#include "test/IntegrationTest/test.h"
+
+using namespace LIBC_NAMESPACE;
+
+void single_thread() {
+  // FIXME: The NVPTX backend cannot handle atomic CAS on a local address space.
+#if defined(LIBC_TARGET_ARCH_IS_AMDGPU)
+  FixedStack<int, 16> local_stack;
+
+  for (int i = 0; i < 16; ++i)
+    EXPECT_TRUE(local_stack.push(i));
+  ASSERT_TRUE(local_stack.full());
+
+  int val;
+  for (int i = 0; i < 16; ++i) {
+    EXPECT_TRUE(local_stack.pop(val));
+    EXPECT_EQ(val, 16 - 1 - i);
+  }
+  ASSERT_TRUE(local_stack.empty());
+#endif
+}
+
+static FixedStack<uint32_t, 2048> global_stack;
+void multiple_threads() {
+  // We need enough space in the stack as threads in flight can temporarily
+  // consume memory before they finish committing it back to the stack.
+  ASSERT_EQ(gpu::get_num_blocks() * gpu::get_num_threads(), 512);
+
+  uint32_t val;
+  uint32_t num_threads = static_cast<uint32_t>(gpu::get_num_threads());
+  for (int i = 0; i < 256; ++i) {
+    EXPECT_TRUE(global_stack.push(UINT32_MAX));
+    EXPECT_TRUE(global_stack.pop(val));
+    ASSERT_TRUE(val < num_threads || val == UINT32_MAX);
+  }
+
+  EXPECT_TRUE(global_stack.push(static_cast<uint32_t>(gpu::get_thread_id())));
+  EXPECT_TRUE(global_stack.push(static_cast<uint32_t>(gpu::get_thread_id())));
+  EXPECT_TRUE(global_stack.pop(val));
+  ASSERT_TRUE(val < num_threads || val == UINT32_MAX);
+
+  // Fill the rest of the stack with the default value.
+  while (!global_stack.push(UINT32_MAX))
+    ;
+}
+
+// Once all the threads have finished executing check the final state of the
+// stack. Destructors are always run with a single thread on the GPU.
+[[gnu::destructor]] void check_stack() {
+  ASSERT_FALSE(global_stack.empty());
+
+  while (!global_stack.empty()) {
+    uint32_t val;
+    ASSERT_TRUE(global_stack.pop(val));
+    ASSERT_TRUE(val < 64 || val == UINT32_MAX);
+  }
+}
+
+TEST_MAIN(int argc, char **argv, char **envp) {
+  single_thread();
+
+  multiple_threads();
+
+  return 0;
+}
diff --git a/libc/test/src/__support/CMakeLists.txt b/libc/test/src/__support/CMakeLists.txt
index c5634866f839b3..110022935e6b8e 100644
--- a/libc/test/src/__support/CMakeLists.txt
+++ b/libc/test/src/__support/CMakeLists.txt
@@ -122,6 +122,16 @@ add_libc_test(
     libc.src.__support.fixedvector
 )
 
+add_libc_test(
+  fixedstack_test
+  SUITE
+    libc-support-tests
+  SRCS
+    fixedstack_test.cpp
+  DEPENDS
+    libc.src.__support.fixedstack
+)
+
 add_libc_test(
   char_vector_test
   SUITE
diff --git a/libc/test/src/__support/fixedstack_test.cpp b/libc/test/src/__support/fixedstack_test.cpp
new file mode 100644
index 00000000000000..cbbffb91311a29
--- /dev/null
+++ b/libc/test/src/__support/fixedstack_test.cpp
@@ -0,0 +1,26 @@
+//===-- Unittests for FixedStack ------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "src/__support/fixedstack.h"
+#include "test/UnitTest/Test.h"
+
+TEST(LlvmLibcFixedStackTest, PushAndPop) {
+  static LIBC_NAMESPACE::FixedStack<int, 20> fixed_stack;
+  ASSERT_TRUE(fixed_stack.empty());
+  for (int i = 0; i < 20; i++)
+    ASSERT_TRUE(fixed_stack.push(i));
+  ASSERT_FALSE(fixed_stack.empty());
+  ASSERT_FALSE(fixed_stack.push(123));
+  int val;
+  for (int i = 20; i > 0; --i) {
+    ASSERT_TRUE(fixed_stack.pop(val));
+    ASSERT_EQ(val, i - 1);
+  }
+  ASSERT_FALSE(fixed_stack.pop(val));
+  ASSERT_TRUE(fixed_stack.empty());
+}

jhuber6 added a commit to jhuber6/llvm-project that referenced this pull request Feb 26, 2024
Summary:
This function was never marked as supported because it was fundamentally
broken when called with multiple threads. The patch in
llvm#83026 introduces a lock-free
stack that can be used to correctly handle enqueuing callbacks from
multiple threads. Although the previous interface tried to provide a
consistent API, this was not feasible with the needs of a lock-free
stack, so I have elected to just use ifdefs. The size is fixed to
whatever we use for testing, which currently amounts to about 8KiB
dedicated to this structure, which isn't enough to be concerned about.

Depends on llvm#83026
JonChesterfield commented Feb 26, 2024

ABA tag plus handwavy arguments that it's unlikely to overflow is unsound with an unfair scheduler. Failure mode goes:

thread A reads the integers, plans to commit to the first value in the stack, gets suspended
other threads do things with the stack for a while - uint32_t puts that in the ballpark of some hours
thread A gets rescheduled around the time the stack happens to have an aba counter that matches what it used to have

Bang, game over.

Haven't read the rest of the patch. CAS + ABA + unfair scheduler => broken.

Worth noting that the CAS loop will fail whenever the index (or the counter) is not the expected value which I'd expect to interact badly with the scheduler to increase the probability of failure.

#elif defined(LIBC_TARGET_ARCH_IS_X86)
__builtin_ia32_pause();
#else
// Simply do nothing if sleeping isn't supported on this platform.
SchrodingerZhu commented Feb 26, 2024
I remember armv8 does have yield/isb assembly.

rust-lang/rust@c064b65

jhuber6 replied:
Might be a good utility function overall, since you generally want something like this for every atomic spin loop.
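A generic version of such a spin-loop hint might look like the following. This is a sketch using compiler builtins; the AArch64 branch follows the rust-lang change linked above and is an assumption, not part of the patch:

```cpp
// Portable spin-loop backoff hint: emit a pause-style instruction
// where one exists, and fall back to a plain no-op elsewhere.
inline void spin_backoff() {
#if defined(__x86_64__) || defined(__i386__)
  __builtin_ia32_pause(); // x86 PAUSE instruction
#elif defined(__aarch64__)
  // Instruction barrier, per the linked rust-lang change.
  asm volatile("isb" ::: "memory");
#else
  // No architectural hint available; the caller simply busy-waits.
#endif
}
```

Callers would invoke this once per failed CAS iteration, exactly where the patch calls sleep_briefly().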

jhuber6 commented Feb 26, 2024

ABA tag plus handwavy arguments that it's unlikely to overflow is unsound with an unfair scheduler. Failure mode goes:

thread A reads the integers, plans to commit to the first value in the stack, gets suspended
other threads do things with the stack for a while - uint32_t puts that in the ballpark of some hours
thread A gets rescheduled around the time the stack happens to have an aba counter that matches what it used to have

Bang, game over.

Haven't read the rest of the patch. CAS + ABA + unfair scheduler => broken.

Worth noting that the CAS loop will fail whenever the index (or the counter) is not the expected value which I'd expect to interact badly with the scheduler to increase the probability of failure.

I am aware that this has a failure mode when the ABA counter overflows back to its original value. This requires that the thread is suspended for roughly 4.3 billion stack manipulations and that the index has returned to its original value when it is resumed. There is nothing I am planning on using this structure for which will even come close to that many manipulations, see #83037. And even if we were to exceed this amount we are still working on probability that could win you the lottery. So, I'm personally not concerned with this being a problem.

JonChesterfield commented Feb 26, 2024

I am concerned with shipping code that has a known data corruption race condition that hits with period on the order of days. The initial use may be light enough that the failure modes are latent, it doesn't follow that the struct won't be used for other things in the future. Especially given the application domain of HPC / AI where jobs are at credible risk of running for multiple hours at a time.

I think there's a known fetch_add stack in the literature. There's a queue https://dl.acm.org/doi/10.1145/2851141.2851168 that might be adaptable.

I'd suggest a structure more like

template <typename T, size_t N>
struct stack
{
   T value[N]; 
   bitmap<N> in_use;
};

Let value[0] be the first value written.

Push is find-first-set to choose a candidate slot, then fetch_or to set that bit. Success when fetch_or returned a zero in the corresponding position, failure means try push again. Out of space when all bits are in use.

Pop is fetch_and to clear the corresponding bit.

For atexit you don't need a stack - you need push() and iterate() which is simpler than pop(), a pointer and fetch_add is sufficient.

The lottery analogy is not persuasive - the probability of this failure mode is far higher than a lottery and you can gamble a billion times a second instead of once a week.
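A single-word (N ≤ 64) sketch of the suggested bitmap scheme follows. The names and memory orderings are illustrative, and the sketch deliberately simplifies one hard part: a production version must ensure value[slot] is published before a concurrent popper can observe the in_use bit set, which is part of why stack semantics are subtle here:

```cpp
#include <atomic>
#include <cstdint>

// Sketch of the reviewer's bitmap-backed fixed stack, restricted to a
// single 64-bit word of occupancy bits for brevity.
template <typename T, unsigned N> struct BitmapStack {
  static_assert(N >= 1 && N <= 64, "one 64-bit occupancy word");
  T value[N];
  std::atomic<uint64_t> in_use{0};

  bool push(const T &v) {
    for (;;) {
      uint64_t bits = in_use.load(std::memory_order_relaxed);
      uint64_t valid = (N == 64) ? ~0ull : ((1ull << N) - 1);
      uint64_t free_bits = ~bits & valid;
      if (free_bits == 0)
        return false; // all slots in use
      // find-first-set over the clear bits picks the candidate slot.
      unsigned slot = static_cast<unsigned>(__builtin_ctzll(free_bits));
      uint64_t mask = 1ull << slot;
      // fetch_or claims the slot only if the bit was previously clear.
      if (!(in_use.fetch_or(mask, std::memory_order_acq_rel) & mask)) {
        value[slot] = v; // see caveat above about publication order
        return true;
      }
      // Lost the race for this slot; retry with a fresh snapshot.
    }
  }

  bool pop(T &out) {
    for (;;) {
      uint64_t bits = in_use.load(std::memory_order_acquire);
      if (bits == 0)
        return false; // empty
      // Stack order: the next value out is at the highest set bit.
      unsigned slot = 63u - static_cast<unsigned>(__builtin_clzll(bits));
      out = value[slot];
      uint64_t mask = 1ull << slot;
      // fetch_and releases the slot only if we cleared the bit first.
      if (in_use.fetch_and(~mask, std::memory_order_acq_rel) & mask)
        return true;
    }
  }
};
```

Single-threaded, this behaves as a LIFO: pushing 1, 2, 3 pops back 3, 2, 1.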

SchrodingerZhu commented Feb 26, 2024

Given that the stack is of a fixed size (i.e. there cannot be too many different tags), is ABA still a concern?

I am not exactly sure about the GPU's assumptions. On the CPU side, the dtor list used by atexit should be MPSC, and consumption can only happen at the end of the program without concurrency.
There is no concurrent reclamation of the stack cells at all.

I was wrong; the exit call can be initiated by multiple threads concurrently.

jhuber6 commented Feb 26, 2024

Given that the stack is of a fixed size (i.e. there cannot be too many different tags). Is ABA still a concern?

Theoretically you can push(); pop(); over and over again, but I said it's unlikely.

I'd suggest a structure more like

template <typename T, size_t N>
struct stack
{
   T value[N]; 
   bitmap<N> in_use;
};

This works for the use case of a fixed-size buffer, but it is not immediately obvious how to get stack semantics from it: you would need extra information to track ordering, which would then require operations similar to the ones in this implementation.

The lottery analogy is not persuasive - the probability of this failure mode is far higher than a lottery and you can gamble a billion times a second instead of once a week.

If we assume that the number of stack operations occurring while the thread is suspended is evenly distributed, then the chance of this happening is 1 in 4294967295, if I'm not mistaken.

JonChesterfield commented Feb 26, 2024

You don't need extra information to get stack semantics - the next value to pop is the one allocated at the highest index.

The thread being woken up is not remotely evenly distributed - in the best case it's a function of what the threads are doing, in an opencl scheduling model it's arbitrarily far from evenly distributed.

jhuber6 commented Feb 26, 2024

I have updated the implementation to use 44 bits of ABA counter and 20 bits of index. This allows for a fixed-size buffer of up to 2^20 entries, and using generous timing it would take three quarters of a year of continuous execution to overflow the counter, with an even smaller chance of a conflict.
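The revised head encoding can be illustrated as follows. The exact masks in the updated patch are not shown in this thread, so this split is an assumption based on the stated 44/20 division:

```cpp
#include <cstdint>

// Sketch of the revised head word: the low 20 bits hold the node index
// (up to 2^20 slots) and the upper 44 bits hold the ABA counter.
constexpr uint64_t INDEX_BITS = 20;
constexpr uint64_t INDEX_MASK = (1ull << INDEX_BITS) - 1;

constexpr uint32_t get_node(uint64_t head) {
  return static_cast<uint32_t>(head & INDEX_MASK);
}

constexpr uint64_t make_new_head(uint64_t orig, uint32_t node) {
  // Increment the 44-bit tag (wrapping modulo 2^44) and splice in the
  // new 20-bit index.
  return (node & INDEX_MASK) | (((orig >> INDEX_BITS) + 1) << INDEX_BITS);
}
```

With this layout the counter must wrap through 2^44 updates, rather than 2^32, before an ABA collision is even possible.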

jhuber6 added a commit to jhuber6/llvm-project that referenced this pull request Feb 26, 2024
Summary:
The llvm#83026 patch has another
use for this function. Additionally add support for the Arm instruction
barrier if this is ever used by other targets.
jhuber6 added a commit that referenced this pull request Feb 26, 2024
Summary:
The #83026 patch has another
use for this function. Additionally add support for the Arm instruction
barrier if this is ever used by other targets.
jhuber6 added a commit to jhuber6/llvm-project that referenced this pull request Feb 27, 2024
Summary:
This function was never marked as supported because it was fundamentally
broken when called with multiple threads. The patch in
llvm#83026 introduces a lock-free
stack that can be used to correctly handle enqueuing callbacks from
multiple threads. Although the previous interface tried to provide a
consistent API, this was not feasible with the needs of a lock-free
stack, so I have elected to just use ifdefs. The size is fixed to
whatever we use for testing, which currently amounts to about 8KiB
dedicated to this structure, which isn't enough to be concerned about.

Depends on llvm#83026
// A lock-free fixed size stack backed by an underlying cpp::array data
// structure. It supports push and pop operations in a thread safe manner.
template <typename T, uint32_t CAPACITY> class alignas(16) FixedStack {
static_assert(CAPACITY < 1024 * 1024, "Invalid buffer size");
A reviewer commented:
nit: add a comment explaining why 1024*1024 is the max size

jhuber6 replied:
Done. Unfortunately I don't know of any other use-cases for a strict stack in libc. The other uses of atexit just get to claim the whole thing with a mutex so they don't need to worry about this and just let it append.

jhuber6 commented Mar 11, 2024

Any opinions on this? It's a lot of work for a pretty niche use-case. I could potentially simplify it by just treating this like a bump pointer and flushing it once at the call to atexit, ignoring any potential incoming calls. However, I already have this version working well, so simplifying it would be additional work at this point.

nickdesaulniers commented

What's the latest on this PR?

jhuber6 commented Oct 10, 2024

What's the latest on this PR?

The implementation works, the only use-case is a GPU implementation of calling atexit with multiple threads so I don't know how needed it is.
