
Commit cb13b42

andrea-parri authored and Ingo Molnar committed
locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()
Continuing along with the fight against smp_read_barrier_depends() [1] (or rather, against its improper use), add an unconditional barrier to cmpxchg(). This guarantees that dependency ordering is preserved when a dependency is headed by an unsuccessful cmpxchg(). As it turns out, the change could enable further simplification of LKMM as proposed in [2].

[1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
    https://marc.info/?l=linux-kernel&m=150884946319353&w=2
    https://marc.info/?l=linux-kernel&m=151215810824468&w=2
    https://marc.info/?l=linux-kernel&m=151215816324484&w=2

[2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2

Signed-off-by: Andrea Parri <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
Cc: Alan Stern <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Richard Henderson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
1 parent 88e77dc commit cb13b42
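
A minimal sketch of the scenario the message describes, using hypothetical names (struct foo, shared, writer, reader; none of these appear in this commit): a reader whose cmpxchg() fails still dereferences the pointer that the failed operation returned, and it is the unconditional barrier that keeps this address-dependent load ordered on Alpha.

	struct foo { int data; };
	struct foo *shared;		/* NULL until a node is published */

	/* Writer: initialize the node, then publish it. */
	void writer(struct foo *node)
	{
		node->data = 42;
		smp_store_release(&shared, node);
	}

	/* Reader: try to install 'mine'; if the writer got there first,
	 * cmpxchg() fails and returns the pointer the writer published. */
	int reader(struct foo *mine)
	{
		struct foo *old = cmpxchg(&shared, NULL, mine);

		if (old)			/* cmpxchg() failed */
			return old->data;	/* dependent load; the unconditional
						 * barrier guarantees it observes 42 */
		return -1;			/* we installed 'mine' ourselves */
	}

Before this change, the barrier sat only on the success path, so the failed-cmpxchg() reader above had no ordering guarantee at all on Alpha.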

1 file changed (+7, -8)

arch/alpha/include/asm/xchg.h

@@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
  * store NEW in MEM.  Return the initial value in MEM.  Success is
  * indicated by comparing RETURN with OLD.
  *
- * The memory barrier should be placed in SMP only when we actually
- * make the change. If we don't change anything (so if the returned
- * prev is equal to old) then we aren't acquiring anything new and
- * we don't need any memory barrier as far I can tell.
+ * The memory barrier is placed in SMP unconditionally, in order to
+ * guarantee that dependency ordering is preserved when a dependency
+ * is headed by an unsuccessful operation.
  */
 
 static inline unsigned long
@@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br 1b\n"
 	".previous"
@@ -177,8 +176,8 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br 1b\n"
 	".previous"
@@ -200,8 +199,8 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
 	"	mov %4,%1\n"
 	"	stl_c %1,%2\n"
 	"	beq %1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br 1b\n"
 	".previous"
@@ -223,8 +222,8 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
 	"	mov %4,%1\n"
 	"	stq_c %1,%2\n"
 	"	beq %1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br 1b\n"
 	".previous"
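
Why moving __ASM__MB below the "2:" label makes the barrier unconditional: the compare-failure branch earlier in each routine (a "beq ...,2f" outside the hunk context shown above) jumps straight to "2:", so with the barrier placed before the label only a successful operation executed it. A condensed sketch of the control flow after this change, assuming the _u64 variant:

	1:	ldq_l	%0,%5		# load-locked the current value
		cmpeq	%0,%3,%1	# compare with 'old'
		beq	%1,2f		# mismatch: fail, branch past the store
		mov	%4,%1
		stq_c	%1,%2		# conditionally store 'new'
		beq	%1,3f		# lost the reservation: retry at 1b
	2:
		__ASM__MB		# now executed on success *and* failure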
