
[Offload][NFC] Fix typos discovered by codespell #125119

Merged: 1 commit merged into llvm:main from cclauss:offload-codespell on Jan 31, 2025

Conversation

@cclauss (Contributor) commented on Jan 30, 2025

https://github.com/codespell-project/codespell

% codespell --ignore-words-list=archtype,hsa,identty,inout,iself,nd,te,ths,vertexes --write-changes
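
For reference, a minimal local reproduction of this cleanup (a sketch, assuming codespell is installed via pip and the command is run from the root of an llvm-project checkout) is to scope the same invocation to the offload directory:

% pip install codespell
% codespell offload \
    --ignore-words-list=archtype,hsa,identty,inout,iself,nd,te,ths,vertexes \
    --write-changes

Dropping --write-changes turns this into a dry run that only reports misspellings instead of editing files in place.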

@llvmbot (Member) commented on Jan 30, 2025

@llvm/pr-subscribers-backend-amdgpu

@llvm/pr-subscribers-offload

Author: Christian Clauss (cclauss)

Changes

https://github.com/codespell-project/codespell

% codespell --ignore-words-list=archtype,hsa,identty,inout,iself,nd,te,ths,vertexes --write-changes


Patch is 52.09 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/125119.diff

46 Files Affected:

  • (modified) offload/DeviceRTL/include/Configuration.h (+1-1)
  • (modified) offload/DeviceRTL/include/Mapping.h (+1-1)
  • (modified) offload/DeviceRTL/include/State.h (+3-3)
  • (modified) offload/DeviceRTL/include/Synchronization.h (+1-1)
  • (modified) offload/DeviceRTL/src/Configuration.cpp (+1-1)
  • (modified) offload/DeviceRTL/src/Misc.cpp (+1-1)
  • (modified) offload/DeviceRTL/src/Reduction.cpp (+2-2)
  • (modified) offload/DeviceRTL/src/Synchronization.cpp (+4-4)
  • (modified) offload/DeviceRTL/src/Workshare.cpp (+3-3)
  • (modified) offload/include/OpenMP/OMPT/Callback.h (+1-1)
  • (modified) offload/include/PluginManager.h (+2-2)
  • (modified) offload/include/device.h (+1-1)
  • (modified) offload/include/omptarget.h (+1-1)
  • (modified) offload/liboffload/API/APIDefs.td (+1-1)
  • (modified) offload/liboffload/API/README.md (+1-1)
  • (modified) offload/liboffload/src/OffloadLib.cpp (+1-1)
  • (modified) offload/plugins-nextgen/amdgpu/src/rtl.cpp (+24-24)
  • (modified) offload/plugins-nextgen/common/CMakeLists.txt (+1-1)
  • (modified) offload/plugins-nextgen/common/include/GlobalHandler.h (+2-2)
  • (modified) offload/plugins-nextgen/common/include/JIT.h (+1-1)
  • (modified) offload/plugins-nextgen/common/include/PluginInterface.h (+9-9)
  • (modified) offload/plugins-nextgen/common/include/RPC.h (+2-2)
  • (modified) offload/plugins-nextgen/common/src/GlobalHandler.cpp (+1-1)
  • (modified) offload/plugins-nextgen/common/src/PluginInterface.cpp (+7-7)
  • (modified) offload/plugins-nextgen/cuda/src/rtl.cpp (+5-5)
  • (modified) offload/plugins-nextgen/host/dynamic_ffi/ffi.h (+1-1)
  • (modified) offload/src/OpenMP/API.cpp (+2-2)
  • (modified) offload/src/OpenMP/Mapping.cpp (+2-2)
  • (modified) offload/src/PluginManager.cpp (+2-2)
  • (modified) offload/src/device.cpp (+2-2)
  • (modified) offload/src/interface.cpp (+3-3)
  • (modified) offload/src/omptarget.cpp (+2-2)
  • (modified) offload/test/api/omp_target_memcpy_async1.c (+1-1)
  • (modified) offload/test/mapping/target_uses_allocator.c (+1-1)
  • (modified) offload/test/offloading/fortran/dump_map_tables.f90 (+1-1)
  • (modified) offload/test/offloading/fortran/implicit-record-field-mapping.f90 (+1-1)
  • (modified) offload/test/offloading/fortran/local-descriptor-map-regress.f90 (+2-2)
  • (modified) offload/test/offloading/fortran/target-map-derived-type-full-1.f90 (+1-1)
  • (modified) offload/test/offloading/fortran/target-map-derived-type-full-implicit-1.f90 (+1-1)
  • (modified) offload/test/offloading/fortran/target-map-derived-type-full-implicit-2.f90 (+1-1)
  • (modified) offload/test/offloading/parallel_target_teams_reduction_max.cpp (+1-1)
  • (modified) offload/test/offloading/parallel_target_teams_reduction_min.cpp (+1-1)
  • (modified) offload/test/offloading/struct_mapping_with_pointers.cpp (+1-1)
  • (modified) offload/test/unified_shared_memory/associate_ptr.c (+1-1)
  • (modified) offload/test/unified_shared_memory/close_member.c (+1-1)
  • (modified) offload/tools/kernelreplay/llvm-omp-kernel-replay.cpp (+1-1)
diff --git a/offload/DeviceRTL/include/Configuration.h b/offload/DeviceRTL/include/Configuration.h
index f8b7a6c3c6c9da..95408933dd8653 100644
--- a/offload/DeviceRTL/include/Configuration.h
+++ b/offload/DeviceRTL/include/Configuration.h
@@ -27,7 +27,7 @@ uint32_t getNumDevices();
 /// Return the device number in the system for omp_get_device_num.
 uint32_t getDeviceNum();
 
-/// Return the user choosen debug level.
+/// Return the user chosen debug level.
 uint32_t getDebugKind();
 
 /// Return if teams oversubscription is assumed
diff --git a/offload/DeviceRTL/include/Mapping.h b/offload/DeviceRTL/include/Mapping.h
index 2fb87abe5418c0..2217eb7616b386 100644
--- a/offload/DeviceRTL/include/Mapping.h
+++ b/offload/DeviceRTL/include/Mapping.h
@@ -63,7 +63,7 @@ LaneMaskTy activemask();
 /// Return a mask describing all threads with a smaller Id in the warp.
 LaneMaskTy lanemaskLT();
 
-/// Return a mask describing all threads with a larget Id in the warp.
+/// Return a mask describing all threads with a larger Id in the warp.
 LaneMaskTy lanemaskGT();
 
 /// Return the thread Id in the warp, in [0, getWarpSize()).
diff --git a/offload/DeviceRTL/include/State.h b/offload/DeviceRTL/include/State.h
index c487ff29680faa..f0500c1083d7f4 100644
--- a/offload/DeviceRTL/include/State.h
+++ b/offload/DeviceRTL/include/State.h
@@ -33,7 +33,7 @@ namespace memory {
 /// Note: See the restrictions on __kmpc_alloc_shared for proper usage.
 void *allocShared(uint64_t Size, const char *Reason);
 
-/// Free \p Ptr, alloated via allocShared, for \p Reason.
+/// Free \p Ptr, allocated via allocShared, for \p Reason.
 ///
 /// Note: See the restrictions on __kmpc_free_shared for proper usage.
 void freeShared(void *Ptr, uint64_t Bytes, const char *Reason);
@@ -44,7 +44,7 @@ void *allocGlobal(uint64_t Size, const char *Reason);
 /// Return a pointer to the dynamic shared memory buffer.
 void *getDynamicBuffer();
 
-/// Free \p Ptr, alloated via allocGlobal, for \p Reason.
+/// Free \p Ptr, allocated via allocGlobal, for \p Reason.
 void freeGlobal(void *Ptr, const char *Reason);
 
 } // namespace memory
@@ -365,7 +365,7 @@ inline state::Value<uint32_t, state::VK_Level> Level;
 /// The `active-level` describes which of the parallel level counted with the
 /// `level-var` is active. There can only be one.
 ///
-/// active-level-var is 1, if ActiveLevelVar is not 0, otherweise it is 0.
+/// active-level-var is 1, if ActiveLevelVar is not 0, otherwise it is 0.
 inline state::Value<uint32_t, state::VK_ActiveLevel> ActiveLevel;
 
 /// TODO
diff --git a/offload/DeviceRTL/include/Synchronization.h b/offload/DeviceRTL/include/Synchronization.h
index 5a789441b9d35f..5045d3c2c99a33 100644
--- a/offload/DeviceRTL/include/Synchronization.h
+++ b/offload/DeviceRTL/include/Synchronization.h
@@ -22,7 +22,7 @@ namespace atomic {
 
 enum OrderingTy {
   relaxed = __ATOMIC_RELAXED,
-  aquire = __ATOMIC_ACQUIRE,
+  acquire = __ATOMIC_ACQUIRE,
   release = __ATOMIC_RELEASE,
   acq_rel = __ATOMIC_ACQ_REL,
   seq_cst = __ATOMIC_SEQ_CST,
diff --git a/offload/DeviceRTL/src/Configuration.cpp b/offload/DeviceRTL/src/Configuration.cpp
index 9e14c203d4a04e..0b488b8034178d 100644
--- a/offload/DeviceRTL/src/Configuration.cpp
+++ b/offload/DeviceRTL/src/Configuration.cpp
@@ -27,7 +27,7 @@ using namespace ompx;
     0;
 [[gnu::weak]] extern const uint32_t __omp_rtl_assume_teams_oversubscription = 0;
 
-// This variable should be visibile to the plugin so we override the default
+// This variable should be visible to the plugin so we override the default
 // hidden visibility.
 [[gnu::used, gnu::retain, gnu::weak,
   gnu::visibility("protected")]] DeviceEnvironmentTy
diff --git a/offload/DeviceRTL/src/Misc.cpp b/offload/DeviceRTL/src/Misc.cpp
index a6660d6853e477..010474b1c4a74d 100644
--- a/offload/DeviceRTL/src/Misc.cpp
+++ b/offload/DeviceRTL/src/Misc.cpp
@@ -33,7 +33,7 @@ double getWTime();
 
 double getWTick() {
   // The number of ticks per second for the AMDGPU clock varies by card and can
-  // only be retrived by querying the driver. We rely on the device environment
+  // only be retrieved by querying the driver. We rely on the device environment
   // to inform us what the proper frequency is.
   return 1.0 / config::getClockFrequency();
 }
diff --git a/offload/DeviceRTL/src/Reduction.cpp b/offload/DeviceRTL/src/Reduction.cpp
index d3b4528401953c..382f6cf392e91a 100644
--- a/offload/DeviceRTL/src/Reduction.cpp
+++ b/offload/DeviceRTL/src/Reduction.cpp
@@ -206,7 +206,7 @@ int32_t __kmpc_nvptx_teams_reduce_nowait_v2(
   // to the number of slots in the buffer.
   bool IsMaster = (ThreadId == 0);
   while (IsMaster) {
-    Bound = atomic::load(&IterCnt, atomic::aquire);
+    Bound = atomic::load(&IterCnt, atomic::acquire);
     if (TeamId < Bound + num_of_records)
       break;
   }
@@ -259,7 +259,7 @@ int32_t __kmpc_nvptx_teams_reduce_nowait_v2(
   unsigned NumRecs = kmpcMin(NumTeams, uint32_t(num_of_records));
   if (ChunkTeamCount == NumTeams - Bound - 1) {
     // Ensure we see the global memory writes by other teams
-    fence::kernel(atomic::aquire);
+    fence::kernel(atomic::acquire);
 
     //
     // Last team processing.
diff --git a/offload/DeviceRTL/src/Synchronization.cpp b/offload/DeviceRTL/src/Synchronization.cpp
index e0e277928fa910..b09d4801faa012 100644
--- a/offload/DeviceRTL/src/Synchronization.cpp
+++ b/offload/DeviceRTL/src/Synchronization.cpp
@@ -84,7 +84,7 @@ uint32_t atomicInc(uint32_t *A, uint32_t V, atomic::OrderingTy Ordering,
   default:
     __builtin_unreachable();
     Case(atomic::relaxed);
-    Case(atomic::aquire);
+    Case(atomic::acquire);
     Case(atomic::release);
     Case(atomic::acq_rel);
     Case(atomic::seq_cst);
@@ -107,7 +107,7 @@ void namedBarrier() {
   uint32_t WarpSize = mapping::getWarpSize();
   uint32_t NumWaves = NumThreads / WarpSize;
 
-  fence::team(atomic::aquire);
+  fence::team(atomic::acquire);
 
   // named barrier implementation for amdgcn.
   // Uses two 16 bit unsigned counters. One for the number of waves to have
@@ -172,7 +172,7 @@ void syncThreads(atomic::OrderingTy Ordering) {
   __builtin_amdgcn_s_barrier();
 
   if (Ordering != atomic::relaxed)
-    fenceTeam(Ordering == atomic::acq_rel ? atomic::aquire : atomic::seq_cst);
+    fenceTeam(Ordering == atomic::acq_rel ? atomic::acquire : atomic::seq_cst);
 }
 void syncThreadsAligned(atomic::OrderingTy Ordering) { syncThreads(Ordering); }
 
@@ -198,7 +198,7 @@ void setCriticalLock(omp_lock_t *Lock) {
         !cas((uint32_t *)Lock, UNSET, SET, atomic::relaxed, atomic::relaxed)) {
       __builtin_amdgcn_s_sleep(32);
     }
-    fenceKernel(atomic::aquire);
+    fenceKernel(atomic::acquire);
   }
 }
 
diff --git a/offload/DeviceRTL/src/Workshare.cpp b/offload/DeviceRTL/src/Workshare.cpp
index ad60e66548be90..cb83f1b670c9ee 100644
--- a/offload/DeviceRTL/src/Workshare.cpp
+++ b/offload/DeviceRTL/src/Workshare.cpp
@@ -79,7 +79,7 @@ template <typename T, typename ST> struct omptarget_nvptx_LoopSupport {
     lb = lb + entityId * chunk;
     T inputUb = ub;
     ub = lb + chunk - 1; // Clang uses i <= ub
-    // Say ub' is the begining of the last chunk. Then who ever has a
+    // Say ub' is the beginning of the last chunk. Then who ever has a
     // lower bound plus a multiple of the increment equal to ub' is
     // the last one.
     T beginingLastChunk = inputUb - (inputUb % chunk);
@@ -806,7 +806,7 @@ template <typename Ty> class StaticLoopChunker {
                                 NumIters, OneIterationPerThread);
   }
 
-  /// Worksharing `distrbute`-loop.
+  /// Worksharing `distribute`-loop.
   static void Distribute(IdentTy *Loc, void (*LoopBody)(Ty, void *), void *Arg,
                          Ty NumIters, Ty BlockChunk) {
     ASSERT(icv::Level == 0, "Bad distribute");
@@ -853,7 +853,7 @@ template <typename Ty> class StaticLoopChunker {
     ASSERT(state::ParallelTeamSize == 1, "Bad distribute");
   }
 
-  /// Worksharing `distrbute parallel for`-loop.
+  /// Worksharing `distribute parallel for`-loop.
   static void DistributeFor(IdentTy *Loc, void (*LoopBody)(Ty, void *),
                             void *Arg, Ty NumIters, Ty NumThreads,
                             Ty BlockChunk, Ty ThreadChunk) {
diff --git a/offload/include/OpenMP/OMPT/Callback.h b/offload/include/OpenMP/OMPT/Callback.h
index 68cb43745eb1f8..9d545c643223f5 100644
--- a/offload/include/OpenMP/OMPT/Callback.h
+++ b/offload/include/OpenMP/OMPT/Callback.h
@@ -56,7 +56,7 @@ FOREACH_OMPT_EMI_EVENT(declareOmptCallback)
 
 /// This function will call an OpenMP API function. Which in turn will lookup a
 /// given enum value of type \p ompt_callbacks_t and copy the address of the
-/// corresponding callback funtion into the provided pointer.
+/// corresponding callback function into the provided pointer.
 /// The pointer to the runtime function is passed during 'initializeLibrary'.
 /// \p which the enum value of the requested callback function
 /// \p callback the destination pointer where the address shall be copied
diff --git a/offload/include/PluginManager.h b/offload/include/PluginManager.h
index f4febac69c45eb..ec3adadf0819b0 100644
--- a/offload/include/PluginManager.h
+++ b/offload/include/PluginManager.h
@@ -77,7 +77,7 @@ struct PluginManager {
   /// Iterate over all device images registered with this plugin.
   auto deviceImages() { return llvm::make_pointee_range(DeviceImages); }
 
-  /// Translation table retreived from the binary
+  /// Translation table retrieved from the binary
   HostEntriesBeginToTransTableTy HostEntriesBeginToTransTable;
   std::mutex TrlTblMtx; ///< For Translation Table
   /// Host offload entries in order of image registration
@@ -171,7 +171,7 @@ struct PluginManager {
   /// Devices associated with plugins, accesses to the container are exclusive.
   ProtectedObj<DeviceContainerTy> Devices;
 
-  /// References to upgraded legacy offloading entires.
+  /// References to upgraded legacy offloading entries.
   std::list<llvm::SmallVector<llvm::offloading::EntryTy, 0>> LegacyEntries;
   std::list<llvm::SmallVector<__tgt_device_image, 0>> LegacyImages;
   llvm::DenseMap<__tgt_bin_desc *, __tgt_bin_desc> UpgradedDescriptors;
diff --git a/offload/include/device.h b/offload/include/device.h
index 3132d35b7b38c8..f4b10abbaa3fda 100644
--- a/offload/include/device.h
+++ b/offload/include/device.h
@@ -134,7 +134,7 @@ struct DeviceTy {
   int32_t recordEvent(void *Event, AsyncInfoTy &AsyncInfo);
 
   /// Wait for an event. This function can be blocking or non-blocking,
-  /// depending on the implmentation. It is expected to set a dependence on the
+  /// depending on the implementation. It is expected to set a dependence on the
   /// event such that corresponding operations shall only start once the event
   /// is fulfilled.
   int32_t waitEvent(void *Event, AsyncInfoTy &AsyncInfo);
diff --git a/offload/include/omptarget.h b/offload/include/omptarget.h
index 2b6445e9fbe550..6971780c7bdb5a 100644
--- a/offload/include/omptarget.h
+++ b/offload/include/omptarget.h
@@ -42,7 +42,7 @@ enum __tgt_target_return_t : int {
   OMP_TGT_SUCCESS = 0,
   /// offload may not execute on the requested target device
   /// this scenario can be caused by the device not available or unsupported
-  /// as described in the Execution Model in the specifcation
+  /// as described in the Execution Model in the specification
   /// this status may not be used for target device execution failure
   /// which should be handled internally in libomptarget
   OMP_TGT_FAIL = ~0
diff --git a/offload/liboffload/API/APIDefs.td b/offload/liboffload/API/APIDefs.td
index 60c1b85d26911e..cee4adea1d9f64 100644
--- a/offload/liboffload/API/APIDefs.td
+++ b/offload/liboffload/API/APIDefs.td
@@ -36,7 +36,7 @@ class IsPointerType<string Type> {
   bit ret = !ne(!find(Type, "*", !sub(!size(Type), 1)), -1);
 }
 
-// Describes the valid range of a pointer parameter that reperesents an array
+// Describes the valid range of a pointer parameter that represents an array
 class Range<string Begin, string End> {
   string begin = Begin;
   string end = End;
diff --git a/offload/liboffload/API/README.md b/offload/liboffload/API/README.md
index 38a055811b2d00..b59ac2782a2be5 100644
--- a/offload/liboffload/API/README.md
+++ b/offload/liboffload/API/README.md
@@ -62,7 +62,7 @@ which preserves ABI compatibility with C.
 Represents a C-style enum. Contains a list of `etor` values, which have a name
 and description.
 
-A `TaggedEtor` record type also exists which addtionally takes a type. This type
+A `TaggedEtor` record type also exists which additionally takes a type. This type
 is used when the enum is used as a parameter to a function with a type-tagged
 function parameter (e.g. `olGetDeviceInfo`).
 
diff --git a/offload/liboffload/src/OffloadLib.cpp b/offload/liboffload/src/OffloadLib.cpp
index 37876713212c98..70e1ce1f84d831 100644
--- a/offload/liboffload/src/OffloadLib.cpp
+++ b/offload/liboffload/src/OffloadLib.cpp
@@ -36,7 +36,7 @@ OffloadConfig &offloadConfig() {
   return Config;
 }
 
-// Pull in the declarations for the implementation funtions. The actual entry
+// Pull in the declarations for the implementation functions. The actual entry
 // points in this file wrap these.
 #include "OffloadImplFuncDecls.inc"
 
diff --git a/offload/plugins-nextgen/amdgpu/src/rtl.cpp b/offload/plugins-nextgen/amdgpu/src/rtl.cpp
index 6fc75ac1542894..92184ba796dbd8 100644
--- a/offload/plugins-nextgen/amdgpu/src/rtl.cpp
+++ b/offload/plugins-nextgen/amdgpu/src/rtl.cpp
@@ -436,7 +436,7 @@ struct AMDGPUMemoryManagerTy : public DeviceAllocatorTy {
   /// have more previously allocated buffers.
   void *allocate(size_t Size, void *HstPtr, TargetAllocTy Kind) override;
 
-  /// Deallocation callack that will be called by the memory manager.
+  /// Deallocation callback that will be called by the memory manager.
   int free(void *TgtPtr, TargetAllocTy Kind) override {
     if (auto Err = MemoryPool->deallocate(TgtPtr)) {
       consumeError(std::move(Err));
@@ -493,7 +493,7 @@ struct AMDGPUDeviceImageTy : public DeviceImageTy {
   }
 
 private:
-  /// The exectuable loaded on the agent.
+  /// The executable loaded on the agent.
   hsa_executable_t Executable;
   StringMap<offloading::amdgpu::AMDGPUKernelMetaData> KernelInfoMap;
   uint16_t ELFABIVersion;
@@ -876,7 +876,7 @@ struct AMDGPUQueueTy {
     hsa_signal_store_relaxed(Queue->doorbell_signal, PacketId);
   }
 
-  /// Callack that will be called when an error is detected on the HSA queue.
+  /// Callback that will be called when an error is detected on the HSA queue.
   static void callbackError(hsa_status_t Status, hsa_queue_t *Source,
                             void *Data);
 
@@ -932,7 +932,7 @@ struct AMDGPUStreamTy {
   /// operation's output signal is set to the consumed slot's signal. If there
   /// is a previous asynchronous operation on the previous slot, the HSA async
   /// operation's input signal is set to the signal of the previous slot. This
-  /// way, we obtain a chain of dependant async operations. The action is a
+  /// way, we obtain a chain of dependent async operations. The action is a
   /// function that will be executed eventually after the operation is
   /// completed, e.g., for releasing a buffer.
   struct StreamSlotTy {
@@ -1055,10 +1055,10 @@ struct AMDGPUStreamTy {
   /// Timeout hint for HSA actively waiting for signal value to change
   const uint64_t StreamBusyWaitMicroseconds;
 
-  /// Indicate to spread data transfers across all avilable SDMAs
+  /// Indicate to spread data transfers across all available SDMAs
   bool UseMultipleSdmaEngines;
 
-  /// Return the current number of asychronous operations on the stream.
+  /// Return the current number of asynchronous operations on the stream.
   uint32_t size() const { return NextSlot; }
 
   /// Return the last valid slot on the stream.
@@ -1155,12 +1155,12 @@ struct AMDGPUStreamTy {
     // changes on the slot.
     std::atomic_thread_fence(std::memory_order_acquire);
 
-    // Peform the operation.
+    // Perform the operation.
     if (auto Err = Slot->performAction())
-      FATAL_MESSAGE(1, "Error peforming post action: %s",
+      FATAL_MESSAGE(1, "Error performing post action: %s",
                     toString(std::move(Err)).data());
 
-    // Signal the output signal to notify the asycnhronous operation finalized.
+    // Signal the output signal to notify the asynchronous operation finalized.
     Slot->Signal->signal();
 
     // Unregister callback.
@@ -1183,9 +1183,9 @@ struct AMDGPUStreamTy {
   /// action. There are two kinds of memory buffers:
   ///   1. For kernel arguments. This buffer can be freed after receiving the
   ///   kernel completion signal.
-  ///   2. For H2D tranfers that need pinned memory space for staging. This
+  ///   2. For H2D transfers that need pinned memory space for staging. This
   ///   buffer can be freed after receiving the transfer completion signal.
-  ///   3. For D2H tranfers that need pinned memory space for staging. This
+  ///   3. For D2H transfers that need pinned memory space for staging. This
   ///   buffer cannot be freed after receiving the transfer completion signal
   ///   because of the following asynchronous H2H callback.
   ///      For this reason, This action can only be taken at
@@ -1222,7 +1222,7 @@ struct AMDGPUStreamTy {
   /// Create an empty stream associated with a specific device.
   AMDGPUStreamTy(AMDGPUDeviceTy &Device);
 
-  /// Intialize the stream's signals.
+  /// Initialize the stream's signals.
   Error init() { return Plugin::success(); }
 
   /// Deinitialize the stream's signals.
@@ -1312,7 +1312,7 @@ struct AMDGPUStreamTy {
   /// Push an asynchronous memory copy device-to-host involving an unpinned
   /// memory buffer. The operation consists of a two-step copy from the
   /// device buffer to an intermediate pinned host buffer, and then, to a
-  /// unpinned host buffer. Both operations are asynchronous and dependant.
+  /// unpinned host buffer. Both operations are asynchronous and dependent.
   /// The intermediate pinned buffer will be released to the specified memory
   /// manager once the operation completes.
   Error pushMemoryCopyD2HAsync(void *Dst, const void *Src, void *Inter,
@@ -1374,7 +1374,7 @@ struct AMDGPUStreamTy {
   /// Push an asynchronous memory copy host-to-device involving an unpinned
   /// memory buffer. The operation consists of a two-step copy from the
   /// unpinned host buffer to an intermediate pinned host buffer, and then, to
-  /// the pinned host buffer. Both operations are asynchronous and dependant.
+  /// the pinned host buffer. Both operations are asynchronous and dependent.
   /// The intermediate pinned buffer will be released to the specified memory
   /// manager once the operation completes.
   Error pushMemoryCopyH2DAsync(void *Dst, const void *Src, void *Inter,
@@ -1672,7 +1672,7 @@ struct AMDGPUStreamManagerTy final
   }
 
 private:
-  /// Search for and assign an prefereably idle queue to the given Stream. If
+  /// Search for and assign an preferably idle queue to the given Stream. If
   /// there is no queue without current users, choose the queue with the lowest
   /// user count. If utilization is ignored: use round robin selection.
   inline Error assignNextQueue(AMDGPUStreamTy *Stream) {
@@ -1856,13 +1856,13 @@ struct AMDHostDeviceTy : public AMDGenericDeviceTy {
   /// Get a memory pool for fine-grained allocations.
   AMDGPUMemoryPoolTy &getFineGrainedMemoryPool() {
     assert(!FineGrainedMemoryPools.empty() && "No fine-grained mempool");
-    // Retrive any memory pool.
+    // Retrieve any memory pool.
     return *FineGrainedMemoryPools[0];
   }
 
   AMDGPUMemoryPoolTy &getCoarseGrainedMemoryPool() {
     assert(!CoarseGrainedMemoryPools.empty() && "No coarse-grained mempool");
-    // Retrive any memory pool.
+    // Retrieve any memory pool.
     return *CoarseGrainedMemoryPools[0];
   }
 
@@ -1937,7 +1937,7 @@ struct AMDGPUDeviceTy : public GenericDeviceTy, AMDGenericDeviceTy {
                          ClockFrequency) != HSA_STATUS_SUCCESS)
       ClockFrequency = 0;
 
-    // Load the grid values dependending on the wavefront.
+    // Load the grid values depending on the wavefront.
     if (WavefrontSize == 32)
       GridValues = getAMDGPUGridValues<32>();
     else if (WavefrontSize == 64)
@@ -2097,7 +2097,7 @@ struct AMDGPUDeviceTy : public GenericDeviceTy, AMDGenericDev...
[truncated]

github-actions bot commented on Jan 30, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@jhuber6 (Contributor) left a comment:

Thanks

Run clang-format before merging.

@@ -22,7 +22,7 @@ namespace atomic {

 enum OrderingTy {
   relaxed = __ATOMIC_RELAXED,
-  aquire = __ATOMIC_ACQUIRE,
+  acquire = __ATOMIC_ACQUIRE,
@cclauss (Contributor, Author) replied:
Code change. Careful review, please. The left-hand side is misspelled but the right-hand side is spelled correctly.

Similar changes are made below in other files in this codebase. It could be a problem if code OUTSIDE the offload directory were built on the same misspelling.
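
One way to check that concern (a hypothetical pre-merge sanity check, not something run as part of this PR) is to grep the rest of the tree for the old identifier from the root of the llvm-project checkout:

% grep -rn "atomic::aquire" . --include='*.cpp' --include='*.h' | grep -v '^\./offload/'

An empty result would suggest that nothing outside offload/ still references the misspelled name.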

@jhuber6 (Contributor) replied:

Yeah, I noticed that. Seems like it's updated everywhere it's used.

@jhuber6 changed the title from "[offload] Fix typos discovered by codespell" to "[Offload][NFC] Fix typos discovered by codespell" on Jan 30, 2025
@jhuber6 merged commit 1f56bb3 into llvm:main on Jan 31, 2025
6 checks passed
@cclauss deleted the offload-codespell branch on January 31, 2025 at 15:36