
Commit 202fb4e

wildea01 authored and ctmarinas committed
arm64: spinlock: Fix theoretical trylock() A-B-A with LSE atomics
If the spinlock "next" ticket wraps around between the initial LDR and the cmpxchg in the LSE version of spin_trylock, then we can erroneously think that we have successfully acquired the lock, because we only check whether the next ticket returned by the cmpxchg is equal to the owner ticket in our updated lock word.

This patch fixes the issue by performing a full 32-bit check of the lock word when trying to determine whether or not the CASA instruction updated memory.

Reported-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
1 parent ec89ab5 commit 202fb4e

File tree

1 file changed (+2 −2 lines changed)


arch/arm64/include/asm/spinlock.h

Lines changed: 2 additions & 2 deletions
@@ -87,8 +87,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
 "	cbnz	%w1, 1f\n"
 "	add	%w1, %w0, %3\n"
 "	casa	%w0, %w1, %2\n"
-"	and	%w1, %w1, #0xffff\n"
-"	eor	%w1, %w1, %w0, lsr #16\n"
+"	sub	%w1, %w1, %3\n"
+"	eor	%w1, %w1, %w0\n"
 "1:")
 	: "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
 	: "I" (1 << TICKET_SHIFT)
