
Commit 3f6aa0b

Mikael Pettersson authored and davem330 committed
sparc32: unbreak arch_write_unlock()
The sparc32 version of arch_write_unlock() is just a plain assignment. Unfortunately this allows the compiler to schedule side-effects in a protected region to occur after the HW-level unlock, which is broken. E.g., the following trivial test case gets miscompiled:

	#include <linux/spinlock.h>

	rwlock_t lock;
	int counter;

	void foo(void)
	{
		write_lock(&lock);
		++counter;
		write_unlock(&lock);
	}

Fixed by adding a compiler memory barrier to arch_write_unlock(). The sparc64 version combines the barrier and assignment into a single asm(), and implements the operation as a static inline, so that's what I did too.

Compile-tested with sparc32_defconfig + CONFIG_SMP=y.

Signed-off-by: Mikael Pettersson <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
1 parent a0fba3e commit 3f6aa0b
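The bug described above is purely a compiler-ordering problem: a volatile store only orders against other volatile accesses, so the non-volatile ++counter from the critical section may be scheduled after the unlocking store. Below is a minimal standalone sketch of that distinction, using an empty GCC asm with a "memory" clobber as a compiler barrier. The names (toy_rwlock_t, toy_write_unlock_*) are made up for illustration; this is not the kernel's code or the sparc asm from the patch below.

	/* Toy illustration only -- not the kernel implementation. */
	typedef struct {
		volatile unsigned int lock;
	} toy_rwlock_t;

	static unsigned int counter;

	/* Broken unlock: a plain assignment.  The compiler may still move the
	 * non-volatile ++counter in the caller past this volatile store. */
	static inline void toy_write_unlock_broken(toy_rwlock_t *rw)
	{
		rw->lock = 0;
	}

	/* Fixed unlock: the empty asm with a "memory" clobber is a compiler
	 * barrier, so every memory access from the critical section must be
	 * emitted before the unlocking store. */
	static inline void toy_write_unlock_fixed(toy_rwlock_t *rw)
	{
		__asm__ __volatile__("" : : : "memory");
		rw->lock = 0;
	}

	void toy_critical_section(toy_rwlock_t *rw)
	{
		/* the write lock is assumed to be held at this point */
		++counter;
		toy_write_unlock_fixed(rw);	/* counter update cannot sink past this */
	}

The patch below takes the sparc64 approach instead and folds the barrier into the same asm() that performs the unlocking store, which has the same ordering effect.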

File tree: 1 file changed (+9, -2 lines)

arch/sparc/include/asm/spinlock_32.h

Lines changed: 9 additions & 2 deletions
@@ -131,6 +131,15 @@ static inline void arch_write_lock(arch_rwlock_t *rw)
 	*(volatile __u32 *)&lp->lock = ~0U;
 }
 
+static void inline arch_write_unlock(arch_rwlock_t *lock)
+{
+	__asm__ __volatile__(
+"	st	%%g0, [%0]"
+	: /* no outputs */
+	: "r" (lock)
+	: "memory");
+}
+
 static inline int arch_write_trylock(arch_rwlock_t *rw)
 {
 	unsigned int val;
@@ -175,8 +184,6 @@ static inline int __arch_read_trylock(arch_rwlock_t *rw)
 	res; \
 })
 
-#define arch_write_unlock(rw)	do { (rw)->lock = 0; } while(0)
-
 #define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
 #define arch_read_lock_flags(rw, flags) arch_read_lock(rw)
 #define arch_write_lock_flags(rw, flags) arch_write_lock(rw)