
Commit 844342c

Author: Yonghong Song (committed)
[BPF] Do atomic_fetch_*() pattern matching with memory ordering
For atomic fetch_and_*() operations, do pattern matching with memory orderings seq_cst, acq_rel, release, acquire and monotonic (relaxed). For fetch_and_*() operations with seq_cst/acq_rel/release/acquire ordering, atomic_fetch_*() instructions are generated. For monotonic ordering, locked insns are generated if the return value is not used; otherwise, atomic_fetch_*() insns are used. The main motivation is to resolve the kernel issue [1].

The following memory orderings are supported: seq_cst, acq_rel, release, acquire, relaxed.

The current gcc-style __sync_fetch_and_*() operations are all seq_cst. To use explicit memory ordering, the _Atomic type is needed. The following is an example:

```
$ cat test.c
#include <stdatomic.h>
void f1(_Atomic int *i) { (void)__c11_atomic_fetch_and(i, 10, memory_order_relaxed); }
void f2(_Atomic int *i) { (void)__c11_atomic_fetch_and(i, 10, memory_order_acquire); }
void f3(_Atomic int *i) { (void)__c11_atomic_fetch_and(i, 10, memory_order_seq_cst); }

$ cat run.sh
clang -I/home/yhs/work/bpf-next/tools/testing/selftests/bpf -O2 --target=bpf -c test.c -o test.o && llvm-objdump -d test.o

$ ./run.sh
test.o: file format elf64-bpf

Disassembly of section .text:

0000000000000000 <f1>:
     0: b4 02 00 00 0a 00 00 00  w2 = 0xa
     1: c3 21 00 00 50 00 00 00  lock *(u32 *)(r1 + 0x0) &= w2
     2: 95 00 00 00 00 00 00 00  exit

0000000000000018 <f2>:
     3: b4 02 00 00 0a 00 00 00  w2 = 0xa
     4: c3 21 00 00 51 00 00 00  w2 = atomic_fetch_and((u32 *)(r1 + 0x0), w2)
     5: 95 00 00 00 00 00 00 00  exit

0000000000000030 <f3>:
     6: b4 02 00 00 0a 00 00 00  w2 = 0xa
     7: c3 21 00 00 51 00 00 00  w2 = atomic_fetch_and((u32 *)(r1 + 0x0), w2)
     8: 95 00 00 00 00 00 00 00  exit
```

The following is another example where the return value is used:

```
$ cat test1.c
#include <stdatomic.h>
int f1(_Atomic int *i) { return __c11_atomic_fetch_and(i, 10, memory_order_relaxed); }
int f2(_Atomic int *i) { return __c11_atomic_fetch_and(i, 10, memory_order_acquire); }
int f3(_Atomic int *i) { return __c11_atomic_fetch_and(i, 10, memory_order_seq_cst); }

$ cat run.sh
clang -I/home/yhs/work/bpf-next/tools/testing/selftests/bpf -O2 --target=bpf -c test1.c -o test1.o && llvm-objdump -d test1.o

$ ./run.sh
test.o: file format elf64-bpf

Disassembly of section .text:

0000000000000000 <f1>:
     0: b4 00 00 00 0a 00 00 00  w0 = 0xa
     1: c3 01 00 00 51 00 00 00  w0 = atomic_fetch_and((u32 *)(r1 + 0x0), w0)
     2: 95 00 00 00 00 00 00 00  exit

0000000000000018 <f2>:
     3: b4 00 00 00 0a 00 00 00  w0 = 0xa
     4: c3 01 00 00 51 00 00 00  w0 = atomic_fetch_and((u32 *)(r1 + 0x0), w0)
     5: 95 00 00 00 00 00 00 00  exit

0000000000000030 <f3>:
     6: b4 00 00 00 0a 00 00 00  w0 = 0xa
     7: c3 01 00 00 51 00 00 00  w0 = atomic_fetch_and((u32 *)(r1 + 0x0), w0)
     8: 95 00 00 00 00 00 00 00  exit
```

You can see that for relaxed memory ordering, if the return value is used, an atomic_fetch_and() insn is used; otherwise, if the return value is not used, a locked insn is used.
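The same code generation should also be reachable through the standard C11 generic functions instead of the __c11_* builtins. A minimal sketch (illustrative only, not part of this commit), assuming the stdatomic.h in use maps the *_explicit generics to the corresponding __c11_ builtins, as clang's own header does:

```c
#include <stdatomic.h>

/* Illustrative only: standard C11 spelling of the relaxed fetch-and above.
 * With the result discarded and relaxed ordering, this should lower to the
 * locked insn form, just like f1() in test.c. */
void g1(_Atomic int *i)
{
	(void)atomic_fetch_and_explicit(i, 10, memory_order_relaxed);
}
```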
Here is another example with a global _Atomic variable:

```
$ cat test3.c
#include <stdatomic.h>

_Atomic int i;

void f1(void) { (void)__c11_atomic_fetch_and(&i, 10, memory_order_relaxed); }
void f2(void) { (void)__c11_atomic_fetch_and(&i, 10, memory_order_seq_cst); }

$ cat run.sh
clang -I/home/yhs/work/bpf-next/tools/testing/selftests/bpf -O2 --target=bpf -c test3.c -o test3.o && llvm-objdump -d test3.o

$ ./run.sh
test3.o: file format elf64-bpf

Disassembly of section .text:

0000000000000000 <f1>:
     0: b4 01 00 00 0a 00 00 00  w1 = 0xa
     1: 18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00  r2 = 0x0 ll
     3: c3 12 00 00 50 00 00 00  lock *(u32 *)(r2 + 0x0) &= w1
     4: 95 00 00 00 00 00 00 00  exit

0000000000000028 <f2>:
     5: b4 01 00 00 0a 00 00 00  w1 = 0xa
     6: 18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00  r2 = 0x0 ll
     8: c3 12 00 00 51 00 00 00  w1 = atomic_fetch_and((u32 *)(r2 + 0x0), w1)
     9: 95 00 00 00 00 00 00 00  exit
```

Note that in the above compilations, '-g' is not used. The reason is the following IR related to the _Atomic type:

```
$ clang -I/home/yhs/work/bpf-next/tools/testing/selftests/bpf -O2 --target=bpf -g -S -emit-llvm test3.c
```

The related debug info for test3.c:

```
!0 = !DIGlobalVariableExpression(var: !1, expr: !DIExpression())
!1 = distinct !DIGlobalVariable(name: "i", scope: !2, file: !3, line: 3, type: !16, isLocal: false, isDefinition: true)
...
!16 = !DIDerivedType(tag: DW_TAG_atomic_type, baseType: !17)
!17 = !DIBasicType(name: "int", size: 32, encoding: DW_ATE_signed)
```

If compiling test.c, the related debug info:

```
...
!19 = distinct !DISubprogram(name: "f1", scope: !1, file: !1, line: 3, type: !20, scopeLine: 3, flags: DIFlagPrototyped | DIFlagAllCallsDescribed, spFlags: DISPFlagDefinition | DISPFlagOptimized, unit: !0, retainedNodes: !25)
!20 = !DISubroutineType(types: !21)
!21 = !{null, !22}
!22 = !DIDerivedType(tag: DW_TAG_pointer_type, baseType: !23, size: 64)
!23 = !DIDerivedType(tag: DW_TAG_atomic_type, baseType: !24)
!24 = !DIBasicType(name: "int", size: 32, encoding: DW_ATE_signed)
!25 = !{!26}
!26 = !DILocalVariable(name: "i", arg: 1, scope: !19, file: !1, line: 3, type: !22)
```

All of the above suggests that _Atomic behaves like a modifier (e.g. const, restrict, volatile). This seems true based on doc [2]. Without properly handling DW_TAG_atomic_type, llvm BTF generation will be incorrect, since the current implementation assumes DW_TAG_atomic_type does not occur. So we have two choices here:
  (1) The llvm bpf backend processes DW_TAG_atomic_type but ignores it in BTF encoding.
  (2) Add another type, e.g. BTF_KIND_ATOMIC, to BTF. BTF_KIND_ATOMIC behaves as a modifier like const/volatile/restrict.

For choice (1), the llvm bpf backend should skip dwarf::DW_TAG_atomic_type during BTF generation whenever necessary. For choice (2), BTF_KIND_ATOMIC would be added to BTF, so the llvm backend and the kernel need to handle it properly. The main advantage of choice (2) is probably that the atomic type is preserved, so it is also available to the skeleton. But I think for the skeleton a raw type is probably good enough, unless user space intends to do some atomic operation on it, which is an unlikely case. So I chose choice (1) in this implementation.

[1] https://lore.kernel.org/bpf/[email protected]/
[2] https://dwarfstd.org/issues/131112.1.html
1 parent ec31f76 commit 844342c

File tree: 3 files changed, +120 -21 lines


clang/lib/Basic/Targets/BPF.cpp

Lines changed: 1 addition & 0 deletions

```diff
@@ -38,6 +38,7 @@ void BPFTargetInfo::getTargetDefines(const LangOptions &Opts,
 
   Builder.defineMacro("__BPF_FEATURE_ADDR_SPACE_CAST");
   Builder.defineMacro("__BPF_FEATURE_MAY_GOTO");
+  Builder.defineMacro("__BPF_FEATURE_ATOMIC_MEM_ORDERING");
 
   if (CPU.empty())
     CPU = "v3";
```

llvm/lib/Target/BPF/BPFInstrInfo.td

Lines changed: 114 additions & 20 deletions

```diff
@@ -826,13 +826,12 @@ let Predicates = [BPFNoALU32] in {
 }
 
 // Atomic Fetch-and-<add, and, or, xor> operations
-class XFALU64<BPFWidthModifer SizeOp, BPFArithOp Opc, string OpcodeStr,
-              string OpcStr, PatFrag OpNode>
+class XFALU64<BPFWidthModifer SizeOp, BPFArithOp Opc, string OpcodeStr, string OpcStr>
     : TYPE_LD_ST<BPF_ATOMIC.Value, SizeOp.Value,
                  (outs GPR:$dst),
                  (ins MEMri:$addr, GPR:$val),
                  "$dst = atomic_fetch_"#OpcStr#"(("#OpcodeStr#" *)($addr), $val)",
-                 [(set GPR:$dst, (OpNode ADDRri:$addr, GPR:$val))]> {
+                 []> {
   bits<4> dst;
   bits<20> addr;
 
@@ -844,13 +843,12 @@ class XFALU64<BPFWidthModifer SizeOp, BPFArithOp Opc, string OpcodeStr,
   let BPFClass = BPF_STX;
 }
 
-class XFALU32<BPFWidthModifer SizeOp, BPFArithOp Opc, string OpcodeStr,
-              string OpcStr, PatFrag OpNode>
+class XFALU32<BPFWidthModifer SizeOp, BPFArithOp Opc, string OpcodeStr, string OpcStr>
     : TYPE_LD_ST<BPF_ATOMIC.Value, SizeOp.Value,
                  (outs GPR32:$dst),
                  (ins MEMri:$addr, GPR32:$val),
                  "$dst = atomic_fetch_"#OpcStr#"(("#OpcodeStr#" *)($addr), $val)",
-                 [(set GPR32:$dst, (OpNode ADDRri:$addr, GPR32:$val))]> {
+                 []> {
   bits<4> dst;
   bits<20> addr;
 
@@ -864,26 +862,122 @@ class XFALU32<BPFWidthModifer SizeOp, BPFArithOp Opc, string OpcodeStr,
 
 let Constraints = "$dst = $val" in {
   let Predicates = [BPFHasALU32], DecoderNamespace = "BPFALU32" in {
-    def XFADDW32 : XFALU32<BPF_W, BPF_ADD, "u32", "add", atomic_load_add_i32>;
-    def XFANDW32 : XFALU32<BPF_W, BPF_AND, "u32", "and", atomic_load_and_i32>;
-    def XFORW32 : XFALU32<BPF_W, BPF_OR, "u32", "or", atomic_load_or_i32>;
-    def XFXORW32 : XFALU32<BPF_W, BPF_XOR, "u32", "xor", atomic_load_xor_i32>;
+    def XFADDW32 : XFALU32<BPF_W, BPF_ADD, "u32", "add">;
+    def XFANDW32 : XFALU32<BPF_W, BPF_AND, "u32", "and">;
+    def XFORW32 : XFALU32<BPF_W, BPF_OR, "u32", "or">;
+    def XFXORW32 : XFALU32<BPF_W, BPF_XOR, "u32", "xor">;
   }
 
   let Predicates = [BPFHasALU32] in {
-    def XFADDD : XFALU64<BPF_DW, BPF_ADD, "u64", "add", atomic_load_add_i64>;
+    def XFADDD : XFALU64<BPF_DW, BPF_ADD, "u64", "add">;
   }
-  def XFANDD : XFALU64<BPF_DW, BPF_AND, "u64", "and", atomic_load_and_i64>;
-  def XFORD : XFALU64<BPF_DW, BPF_OR, "u64", "or", atomic_load_or_i64>;
-  def XFXORD : XFALU64<BPF_DW, BPF_XOR, "u64", "xor", atomic_load_xor_i64>;
+  def XFANDD : XFALU64<BPF_DW, BPF_AND, "u64", "and">;
+  def XFORD : XFALU64<BPF_DW, BPF_OR, "u64", "or">;
+  def XFXORD : XFALU64<BPF_DW, BPF_XOR, "u64", "xor">;
 }
 
-// atomic_load_sub can be represented as a neg followed
-// by an atomic_load_add.
-def : Pat<(atomic_load_sub_i32 ADDRri:$addr, GPR32:$val),
-          (XFADDW32 ADDRri:$addr, (NEG_32 GPR32:$val))>;
-def : Pat<(atomic_load_sub_i64 ADDRri:$addr, GPR:$val),
-          (XFADDD ADDRri:$addr, (NEG_64 GPR:$val))>;
+let Predicates = [BPFHasALU32] in {
+  foreach P = [// add
+               [atomic_load_add_i32_monotonic, XADDW32],
+               [atomic_load_add_i32_acquire, XFADDW32],
+               [atomic_load_add_i32_release, XFADDW32],
+               [atomic_load_add_i32_acq_rel, XFADDW32],
+               [atomic_load_add_i32_seq_cst, XFADDW32],
+               // and
+               [atomic_load_and_i32_monotonic, XANDW32],
+               [atomic_load_and_i32_acquire, XFANDW32],
+               [atomic_load_and_i32_release, XFANDW32],
+               [atomic_load_and_i32_acq_rel, XFANDW32],
+               [atomic_load_and_i32_seq_cst, XFANDW32],
+               // or
+               [atomic_load_or_i32_monotonic, XORW32],
+               [atomic_load_or_i32_acquire, XFORW32],
+               [atomic_load_or_i32_release, XFORW32],
+               [atomic_load_or_i32_acq_rel, XFORW32],
+               [atomic_load_or_i32_seq_cst, XFORW32],
+               // xor
+               [atomic_load_xor_i32_monotonic, XXORW32],
+               [atomic_load_xor_i32_acquire, XFXORW32],
+               [atomic_load_xor_i32_release, XFXORW32],
+               [atomic_load_xor_i32_acq_rel, XFXORW32],
+               [atomic_load_xor_i32_seq_cst, XFXORW32],
+              ] in {
+    def : Pat<(P[0] ADDRri:$addr, GPR32:$val), (P[1] ADDRri:$addr, GPR32:$val)>;
+  }
+
+  // atomic_load_sub can be represented as a neg followed
+  // by an atomic_load_add.
+  foreach P = [[atomic_load_sub_i32_monotonic, XADDW32],
+               [atomic_load_sub_i32_acquire, XFADDW32],
+               [atomic_load_sub_i32_release, XFADDW32],
+               [atomic_load_sub_i32_acq_rel, XFADDW32],
+               [atomic_load_sub_i32_seq_cst, XFADDW32],
+              ] in {
+    def : Pat<(P[0] ADDRri:$addr, GPR32:$val), (P[1] ADDRri:$addr, (NEG_32 GPR32:$val))>;
+  }
+
+  foreach P = [// add
+               [atomic_load_add_i64_monotonic, XADDD],
+               [atomic_load_add_i64_acquire, XFADDD],
+               [atomic_load_add_i64_release, XFADDD],
+               [atomic_load_add_i64_acq_rel, XFADDD],
+               [atomic_load_add_i64_seq_cst, XFADDD],
+              ] in {
+    def : Pat<(P[0] ADDRri:$addr, GPR:$val), (P[1] ADDRri:$addr, GPR:$val)>;
+  }
+}
+
+foreach P = [[atomic_load_sub_i64_monotonic, XADDD],
+             [atomic_load_sub_i64_acquire, XFADDD],
+             [atomic_load_sub_i64_release, XFADDD],
+             [atomic_load_sub_i64_acq_rel, XFADDD],
+             [atomic_load_sub_i64_seq_cst, XFADDD],
+            ] in {
+  def : Pat<(P[0] ADDRri:$addr, GPR:$val), (P[1] ADDRri:$addr, (NEG_64 GPR:$val))>;
+}
+
+// Borrow the idea from X86InstrFragments.td
+class binop_no_use<SDPatternOperator operator>
+  : PatFrag<(ops node:$A, node:$B),
+            (operator node:$A, node:$B),
+            [{ return SDValue(N, 0).use_empty(); }]>;
+
+class binop_has_use<SDPatternOperator operator>
+  : PatFrag<(ops node:$A, node:$B),
+            (operator node:$A, node:$B),
+            [{ return !SDValue(N, 0).use_empty(); }]>;
+
+foreach op = [add, and, or, xor] in {
+  def atomic_load_ # op # _i64_monotonic_nu:
+      binop_no_use <!cast<SDPatternOperator>("atomic_load_"#op#"_i64_monotonic")>;
+  def atomic_load_ # op # _i64_monotonic_hu:
+      binop_has_use<!cast<SDPatternOperator>("atomic_load_"#op#"_i64_monotonic")>;
+}
+
+foreach P = [// and
+             [atomic_load_and_i64_monotonic_nu, XANDD],
+             [atomic_load_and_i64_monotonic_hu, XFANDD],
+             [atomic_load_and_i64_acquire, XFANDD],
+             [atomic_load_and_i64_release, XFANDD],
+             [atomic_load_and_i64_acq_rel, XFANDD],
+             [atomic_load_and_i64_seq_cst, XFANDD],
+             // or
+             [atomic_load_or_i64_monotonic_nu, XORD],
+             [atomic_load_or_i64_monotonic_hu, XFORD],
+             [atomic_load_or_i64_acquire, XFORD],
+             [atomic_load_or_i64_release, XFORD],
+             [atomic_load_or_i64_acq_rel, XFORD],
+             [atomic_load_or_i64_seq_cst, XFORD],
+             // xor
+             [atomic_load_xor_i64_monotonic_nu, XXORD],
+             [atomic_load_xor_i64_monotonic_hu, XFXORD],
+             [atomic_load_xor_i64_acquire, XFXORD],
+             [atomic_load_xor_i64_release, XFXORD],
+             [atomic_load_xor_i64_acq_rel, XFXORD],
+             [atomic_load_xor_i64_seq_cst, XFXORD],
+            ] in {
+  def : Pat<(P[0] ADDRri:$addr, GPR:$val), (P[1] ADDRri:$addr, GPR:$val)>;
+}
 
 // Atomic Exchange
 class XCHG<BPFWidthModifer SizeOp, string OpcodeStr, PatFrag OpNode>
```
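To make the effect of the binop_no_use/binop_has_use fragments concrete, here is a small hedged C sketch (the function names are illustrative, not from this commit): for a 64-bit fetch-and with relaxed (monotonic) ordering, discarding the result matches the *_i64_monotonic_nu pattern so a locked insn can be selected, while consuming the result matches *_i64_monotonic_hu and keeps atomic_fetch_and().

```c
#include <stdatomic.h>

/* Illustrative only: result discarded, relaxed ordering -> *_i64_monotonic_nu,
 * so the backend may select the locked form (lock *(u64 *)(rX + 0x0) &= rY). */
void clear_bits(_Atomic long *flags, long mask)
{
	(void)__c11_atomic_fetch_and(flags, mask, memory_order_relaxed);
}

/* Illustrative only: old value is consumed -> *_i64_monotonic_hu,
 * so atomic_fetch_and() is generated instead. */
long clear_bits_old(_Atomic long *flags, long mask)
{
	return __c11_atomic_fetch_and(flags, mask, memory_order_relaxed);
}
```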

llvm/lib/Target/BPF/BPFMIChecking.cpp

Lines changed: 5 additions & 1 deletion

```diff
@@ -118,7 +118,7 @@ static bool hasLiveDefs(const MachineInstr &MI, const TargetRegisterInfo *TRI) {
 
     RegIsGPR64 = GPR64RegClass->contains(MO.getReg());
     if (!MO.isDead()) {
-      // It is a GPR64 live Def, we are sure it is live. */
+      // It is a GPR64 live Def, we are sure it is live.
       if (RegIsGPR64)
         return true;
       // It is a GPR32 live Def, we are unsure whether it is really dead due to
@@ -153,6 +153,10 @@ static bool hasLiveDefs(const MachineInstr &MI, const TargetRegisterInfo *TRI) {
 }
 
 void BPFMIPreEmitChecking::processAtomicInsts() {
+  if (MF->getSubtarget<BPFSubtarget>().getHasJmp32())
+    return;
+
+  // Only check for cpu version 1 and 2.
   for (MachineBasicBlock &MBB : *MF) {
     for (MachineInstr &MI : MBB) {
       if (MI.getOpcode() != BPF::XADDW && MI.getOpcode() != BPF::XADDD)
```
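For context, the check that remains for older cpus covers the legacy XADDW/XADDD insns, which do not return the old value. A hedged C sketch of code this pass would still reject when built for -mcpu=v1 or -mcpu=v2 (the function is illustrative, not from this commit):

```c
/* Illustrative only: with --target=bpf -mcpu=v1/v2 only the legacy xadd insn
 * is available, so consuming the return value of __sync_fetch_and_add() is
 * flagged by BPFMIPreEmitChecking.  With -mcpu=v3 and later the check is
 * skipped, since atomic_fetch_add() can return the old value. */
long old_value(long *counter)
{
	return __sync_fetch_and_add(counter, 1);
}
```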
