Commit 5a8897c

wildea01 (Will Deacon) authored and Ingo Molnar committed
locking/atomics/alpha: Add smp_read_barrier_depends() to _release()/_relaxed() atomics
As part of the fight against smp_read_barrier_depends(), we require dependency ordering to be preserved when a dependency is headed by a load performed using an atomic operation. This patch adds smp_read_barrier_depends() to the _release() and _relaxed() atomics on alpha, which otherwise lack anything to enforce dependency ordering.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
1 parent 59ecbbe commit 5a8897c

File tree

1 file changed: +13, −0 lines changed

arch/alpha/include/asm/atomic.h

Lines changed: 13 additions & 0 deletions
@@ -13,6 +13,15 @@
  * than regular operations.
  */
 
+/*
+ * To ensure dependency ordering is preserved for the _relaxed and
+ * _release atomics, an smp_read_barrier_depends() is unconditionally
+ * inserted into the _relaxed variants, which are used to build the
+ * barriered versions. To avoid redundant back-to-back fences, we can
+ * define the _acquire and _fence versions explicitly.
+ */
+#define __atomic_op_acquire(op, args...)	op##_relaxed(args)
+#define __atomic_op_fence			__atomic_op_release
 
 #define ATOMIC_INIT(i)		{ (i) }
 #define ATOMIC64_INIT(i)	{ (i) }
@@ -60,6 +69,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
 	return result;							\
 }
 
@@ -77,6 +87,7 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
 	return result;							\
 }
 
@@ -111,6 +122,7 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
 	return result;							\
 }
 
@@ -128,6 +140,7 @@ static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v)	\
 	".previous"							\
 	:"=&r" (temp), "=m" (v->counter), "=&r" (result)		\
 	:"Ir" (i), "m" (v->counter) : "memory");			\
+	smp_read_barrier_depends();					\
 	return result;							\
 }
