
Commit 7064a73

Author: Alexei Starovoitov (committer)
Merge branch 'Atomics for eBPF'
Brendan Jackman says:

====================

There's still one unresolved review comment from John [3] which I will
resolve with a followup patch.

Differences from v6->v7 [1]:
* Fixed riscv build error detected by 0-day robot.

Differences from v5->v6 [1]:
* Carried Björn Töpel's ack for RISC-V code, plus a couple more acks from
  Yonghong.
* Doc fixups.
* Trivial cleanups.

Differences from v4->v5 [1]:
* Fixed bogus type casts in the interpreter that led to warnings from the
  0day robot.
* Dropped feature-detection for Clang per Andrii's suggestion in [4]. The
  selftests will now fail to build unless you have llvm-project commit
  286daafd6512. The ENABLE_ATOMICS_TEST macro is still needed to support
  the no_alu32 tests.
* Carried some Acks from John and Yonghong.
* Dropped confusing usage of __atomic_exchange from prog_test in favour of
  __sync_lock_test_and_set.
* [Really] got rid of all the forest of instruction macros
  (BPF_ATOMIC_FETCH_ADD and friends); now there's just BPF_ATOMIC_OP to
  define all the instructions as we use them in the verifier tests. This
  makes the atomic ops less special in that API, and I don't think the
  resulting usage is actually any harder to read.

Differences from v3->v4 [1]:
* Added one Ack from Yonghong. He acked some other patches but those have
  now changed non-trivially so I didn't add those acks.
* Fixups to commit messages.
* Fixed disassembly and comments: the first arg to atomic_fetch_* is a
  pointer.
* Improved prog_test efficiency. BPF progs are now all loaded in a single
  call, then the skeleton is re-used for each subtest.
* Dropped use of tools/build/feature in favour of a one-liner in the
  Makefile.
* Dropped the commit that created an emit_neg helper in the x86 JIT. It's
  not used any more (it wasn't used in v3 either).
* Combined all the different filter.h macros (used to be BPF_ATOMIC_ADD,
  BPF_ATOMIC_FETCH_ADD, BPF_ATOMIC_AND, etc.) into just BPF_ATOMIC32 and
  BPF_ATOMIC64.
* Removed some references to BPF_STX_XADD from tools/, samples/ and lib/
  that I missed before.

Differences from v2->v3 [1]:
* More minor fixes and naming/comment changes.
* Dropped atomic subtract: compilers can implement this by preceding an
  atomic add with a NEG instruction (which is what the x86 JIT did under
  the hood anyway).
* Dropped the use of -mcpu=v4 in the Clang BPF command-line; there is no
  longer an architecture version bump. Instead a feature test is added to
  Kbuild - it builds a source file to check if Clang supports BPF atomics.
* Fixed the prog_test so it no longer breaks test_progs-no_alu32. This
  requires some ifdef acrobatics to avoid complicating the prog_tests
  model where the same userspace code exercises both the normal and
  no_alu32 BPF test objects, using the same skeleton header.

Differences from v1->v2 [1]:
* Fixed mistakes in the netronome driver.
* Added sub, add, or, xor operations.
* The above led to some refactors to keep things readable. (Maybe I
  should have just waited until I'd implemented these before starting the
  review...)
* Replaced BPF_[CMP]SET | BPF_FETCH with just BPF_[CMP]XCHG, which
  includes the BPF_FETCH flag.
* Added a bit of documentation. Suggestions welcome for more places to
  dump this info...

The prog_test that's added depends on Clang/LLVM features added by
Yonghong in commit 286daafd6512 (was https://reviews.llvm.org/D72184).

This only includes a JIT implementation for x86_64 - I don't plan to
implement JIT support myself for other architectures.

Operations
==========

This patchset adds atomic operations to the eBPF instruction set. The
use-case that motivated this work was a trivial and efficient way to
generate globally-unique cookies in BPF progs, but I think it's obvious
that these features are pretty widely applicable.

The instructions that are added here can be summarised with this list of
kernel operations:

* atomic[64]_[fetch_]add
* atomic[64]_[fetch_]and
* atomic[64]_[fetch_]or
* atomic[64]_xchg
* atomic[64]_cmpxchg

The following are left out of scope for this effort:

* 16 and 8 bit operations
* Explicit memory barriers

Encoding
========

I originally planned to add new values for bpf_insn.opcode. This was
rather unpleasant: the opcode space has holes in it but no entire
instruction classes [2]. Yonghong Song had a better idea: use the
immediate field of the existing STX XADD instruction to encode the
operation. This works nicely, without breaking existing programs, because
the immediate field is currently reserved-must-be-zero, and extra-nicely
because BPF_ADD happens to be zero.

Note that this of course makes immediate-source atomic operations
impossible. It's hard to imagine a measurable speedup from such
instructions, and if it existed it would certainly not benefit x86, which
has no support for them.

The BPF_OP opcode fields are re-used in the immediate, and an additional
flag BPF_FETCH is used to mark instructions that should fetch a
pre-modification value from memory.

So, BPF_XADD is now called BPF_ATOMIC (the old name is kept to avoid
breaking userspace builds), and where we previously had .imm = 0, we now
have .imm = BPF_ADD (which is 0).

Operands
========

Reg-source eBPF instructions only have two operands, while these atomic
operations have up to four. To avoid needing to encode additional
operands, then:

- One of the input registers is re-used as an output register (e.g.
  atomic_fetch_add both reads from and writes to the source register).
- Where necessary (i.e. for cmpxchg), R0 is "hard-coded" as one of the
  operands.

This approach also allows the new eBPF instructions to map directly to
single x86 instructions.

[1] Previous iterations:
v1: https://lore.kernel.org/bpf/[email protected]/
v2: https://lore.kernel.org/bpf/[email protected]/
v3: https://lore.kernel.org/bpf/[email protected]/
v4: https://lore.kernel.org/bpf/[email protected]/
v5: https://lore.kernel.org/bpf/[email protected]/
v6: https://lore.kernel.org/bpf/[email protected]/

[2] Visualisation of eBPF opcode space:
https://gist.github.com/bjackman/00fdad2d5dfff601c1918bc29b16e778

[3] Comment from John about propagating bounds in verifier:
https://lore.kernel.org/bpf/[email protected]/

[4] Mail from Andrii about not supporting old Clang in selftests:
https://lore.kernel.org/bpf/CAEf4BzYBddPaEzRUs=jaWSo5kbf=LZdb7geAUVj85GxLQztuAQ@mail.gmail.com/

====================

Signed-off-by: Alexei Starovoitov <[email protected]>
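The encoding described above can be sketched in userspace C. The opcode constants below mirror the values in the kernel UAPI header (include/uapi/linux/bpf.h); the `bpf_atomic_op()` helper is a hypothetical illustration, not a kernel or libbpf API. Note how a plain atomic add keeps `.imm == 0`, so pre-existing BPF_XADD instructions decode unchanged.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal local mirror of struct bpf_insn and the relevant opcode
 * constants from include/uapi/linux/bpf.h, so this sketch is
 * self-contained. */
struct bpf_insn {
	uint8_t  code;
	uint8_t  dst_reg:4;
	uint8_t  src_reg:4;
	int16_t  off;
	int32_t  imm;
};

#define BPF_STX		0x03	/* instruction class */
#define BPF_W		0x00	/* 32-bit size modifier */
#define BPF_DW		0x18	/* 64-bit size modifier */
#define BPF_ATOMIC	0xc0	/* mode: formerly named BPF_XADD */
#define BPF_ADD		0x00	/* op, encoded in .imm; zero, so legacy XADD stays valid */
#define BPF_FETCH	0x01	/* .imm flag: also load the old value into src_reg */
#define BPF_XCHG	(0xe0 | BPF_FETCH)
#define BPF_CMPXCHG	(0xf0 | BPF_FETCH)

/* Hypothetical helper: build one atomic instruction,
 * *(size *)(dst + off) op= src, with the op in the immediate. */
static struct bpf_insn bpf_atomic_op(uint8_t size, int32_t op,
				     uint8_t dst, uint8_t src, int16_t off)
{
	struct bpf_insn insn = {
		.code = BPF_STX | BPF_ATOMIC | size,
		.dst_reg = dst,
		.src_reg = src,
		.off = off,
		.imm = op,
	};
	return insn;
}
```

Building `bpf_atomic_op(BPF_DW, BPF_ADD, ...)` yields exactly the bytes the old BPF_XADD | BPF_DW | BPF_STX instruction had, which is why existing programs keep working.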
2 parents: bade5c5 + de94857

44 files changed, 1466 insertions(+), 212 deletions(-)

Documentation/networking/filter.rst

Lines changed: 50 additions & 11 deletions

@@ -1006,13 +1006,13 @@ Size modifier is one of ...
 
 Mode modifier is one of::
 
-  BPF_IMM  0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */
-  BPF_ABS  0x20
-  BPF_IND  0x40
-  BPF_MEM  0x60
-  BPF_LEN  0x80 /* classic BPF only, reserved in eBPF */
-  BPF_MSH  0xa0 /* classic BPF only, reserved in eBPF */
-  BPF_XADD 0xc0 /* eBPF only, exclusive add */
+  BPF_IMM     0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */
+  BPF_ABS     0x20
+  BPF_IND     0x40
+  BPF_MEM     0x60
+  BPF_LEN     0x80 /* classic BPF only, reserved in eBPF */
+  BPF_MSH     0xa0 /* classic BPF only, reserved in eBPF */
+  BPF_ATOMIC  0xc0 /* eBPF only, atomic operations */
 
 eBPF has two non-generic instructions: (BPF_ABS | <size> | BPF_LD) and
 (BPF_IND | <size> | BPF_LD) which are used to access packet data.
@@ -1044,11 +1044,50 @@ Unlike classic BPF instruction set, eBPF has generic load/store operations::
     BPF_MEM | <size> | BPF_STX:  *(size *) (dst_reg + off) = src_reg
     BPF_MEM | <size> | BPF_ST:   *(size *) (dst_reg + off) = imm32
     BPF_MEM | <size> | BPF_LDX:  dst_reg = *(size *) (src_reg + off)
-    BPF_XADD | BPF_W  | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
-    BPF_XADD | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg
 
-Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW. Note that 1 and
-2 byte atomic increments are not supported.
+Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW.
+
+It also includes atomic operations, which use the immediate field for extra
+encoding.
+
+   .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W  | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
+   .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg
+
+The basic atomic operations supported are:
+
+    BPF_ADD
+    BPF_AND
+    BPF_OR
+    BPF_XOR
+
+Each having equivalent semantics with the ``BPF_ADD`` example, that is: the
+memory location addressed by ``dst_reg + off`` is atomically modified, with
+``src_reg`` as the other operand. If the ``BPF_FETCH`` flag is set in the
+immediate, then these operations also overwrite ``src_reg`` with the
+value that was in memory before it was modified.
+
+The more special operations are:
+
+    BPF_XCHG
+
+This atomically exchanges ``src_reg`` with the value addressed by ``dst_reg +
+off``.
+
+    BPF_CMPXCHG
+
+This atomically compares the value addressed by ``dst_reg + off`` with
+``R0``. If they match it is replaced with ``src_reg``. The value that was
+there before is loaded back to ``R0``.
+
+Note that 1 and 2 byte atomic operations are not supported.
+
+Except ``BPF_ADD`` _without_ ``BPF_FETCH`` (for legacy reasons), all 4 byte
+atomic operations require alu32 mode. Clang enables this mode by default in
+architecture v3 (``-mcpu=v3``). For older versions it can be enabled with
+``-Xclang -target-feature -Xclang +alu32``.
+
+You may encounter BPF_XADD - this is a legacy name for BPF_ATOMIC, referring
+to the exclusive-add operation encoded when the immediate field is zero.
 
 eBPF has one 16-byte instruction: BPF_LD | BPF_DW | BPF_IMM which consists
 of two consecutive ``struct bpf_insn`` 8-byte blocks and interpreted as single
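The semantics the documentation describes can be modelled in plain userspace C with the GCC/Clang `__sync` builtins (the same family the selftests use). This is an illustrative model only, not kernel code: `src` plays the role of `src_reg` (which `BPF_FETCH` overwrites with the old value), and `r0` plays the role of the hard-coded `R0` operand of `BPF_CMPXCHG`. The `model_*` names are made up for this sketch.

```c
#include <assert.h>
#include <stdint.h>

/* BPF_ADD | BPF_FETCH: *dst += src; the old value lands in src_reg. */
static uint64_t model_fetch_add(uint64_t *dst, uint64_t src)
{
	return __sync_fetch_and_add(dst, src);
}

/* BPF_XCHG: swap src_reg with *(dst_reg + off). */
static uint64_t model_xchg(uint64_t *dst, uint64_t src)
{
	return __sync_lock_test_and_set(dst, src);
}

/* BPF_CMPXCHG: if (*dst == R0) *dst = src; either way the value that
 * was in memory lands back in R0. */
static uint64_t model_cmpxchg(uint64_t *dst, uint64_t r0, uint64_t src)
{
	return __sync_val_compare_and_swap(dst, r0, src);
}
```

A short trace makes the register flow concrete: starting from `v = 10`, `model_fetch_add(&v, 5)` returns 10 and leaves `v == 15`; `model_cmpxchg(&v, 15, 99)` matches, stores 99, and returns the old 15.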

arch/arm/net/bpf_jit_32.c

Lines changed: 3 additions & 4 deletions

@@ -1620,10 +1620,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	}
 	emit_str_r(dst_lo, tmp2, off, ctx, BPF_SIZE(code));
 	break;
-	/* STX XADD: lock *(u32 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_W:
-	/* STX XADD: lock *(u64 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_DW:
+	/* Atomic ops */
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
 		goto notyet;
 	/* STX: *(size *)(dst + off) = src */
 	case BPF_STX | BPF_MEM | BPF_W:

arch/arm64/net/bpf_jit_comp.c

Lines changed: 12 additions & 4 deletions

@@ -875,10 +875,18 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		}
 		break;
 
-	/* STX XADD: lock *(u32 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_W:
-	/* STX XADD: lock *(u64 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_DW:
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (insn->imm != BPF_ADD) {
+			pr_err_once("unknown atomic op code %02x\n", insn->imm);
+			return -EINVAL;
+		}
+
+		/* STX XADD: lock *(u32 *)(dst + off) += src
+		 * and
+		 * STX XADD: lock *(u64 *)(dst + off) += src
+		 */
+
 		if (!off) {
 			reg = dst;
 		} else {
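The same guard recurs in each JIT touched below (arm64, mips, powerpc, riscv, s390, sparc): a backend that only implements atomic add must now inspect the immediate, because under BPF_ATOMIC the operation lives there and `imm == BPF_ADD` (zero) is the only encoding these JITs know how to emit. A standalone sketch of that dispatch shape, with UAPI constant values reproduced locally and a hypothetical `jit_atomic()` helper standing in for a backend's emit path:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define BPF_ADD   0x00	/* the only op these JITs implement; also the legacy XADD encoding */
#define BPF_FETCH 0x01	/* any flagged/other op must be rejected, not silently mis-JITed */

/* Hypothetical backend hook: returns 0 on success, -EINVAL for any
 * atomic operation other than plain add. */
static int jit_atomic(int32_t imm)
{
	if (imm != BPF_ADD)
		return -EINVAL;
	/* ... emit the architecture's atomic-add instruction here ... */
	return 0;
}
```

Rejecting with an error rather than ignoring the immediate matters: an old binary-compatible XADD (`imm == 0`) still JITs, while a new fetch/xchg/cmpxchg program fails loudly instead of being miscompiled as a plain add.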

arch/mips/net/ebpf_jit.c

Lines changed: 8 additions & 3 deletions

@@ -1423,8 +1423,8 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_STX | BPF_H | BPF_MEM:
 	case BPF_STX | BPF_W | BPF_MEM:
 	case BPF_STX | BPF_DW | BPF_MEM:
-	case BPF_STX | BPF_W | BPF_XADD:
-	case BPF_STX | BPF_DW | BPF_XADD:
+	case BPF_STX | BPF_W | BPF_ATOMIC:
+	case BPF_STX | BPF_DW | BPF_ATOMIC:
 		if (insn->dst_reg == BPF_REG_10) {
 			ctx->flags |= EBPF_SEEN_FP;
 			dst = MIPS_R_SP;
@@ -1438,7 +1438,12 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp);
 		if (src < 0)
 			return src;
-		if (BPF_MODE(insn->code) == BPF_XADD) {
+		if (BPF_MODE(insn->code) == BPF_ATOMIC) {
+			if (insn->imm != BPF_ADD) {
+				pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm);
+				return -EINVAL;
+			}
+
 			/*
 			 * If mem_off does not fit within the 9 bit ll/sc
 			 * instruction immediate field, use a temp reg.

arch/powerpc/net/bpf_jit_comp64.c

Lines changed: 20 additions & 5 deletions

@@ -683,10 +683,18 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 			break;
 
 		/*
-		 * BPF_STX XADD (atomic_add)
+		 * BPF_STX ATOMIC (atomic ops)
 		 */
-		/* *(u32 *)(dst + off) += src */
-		case BPF_STX | BPF_XADD | BPF_W:
+		case BPF_STX | BPF_ATOMIC | BPF_W:
+			if (insn->imm != BPF_ADD) {
+				pr_err_ratelimited(
+					"eBPF filter atomic op code %02x (@%d) unsupported\n",
+					code, i);
+				return -ENOTSUPP;
+			}
+
+			/* *(u32 *)(dst + off) += src */
+
 			/* Get EA into TMP_REG_1 */
 			EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off));
 			tmp_idx = ctx->idx * 4;
@@ -699,8 +707,15 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 			/* we're done if this succeeded */
 			PPC_BCC_SHORT(COND_NE, tmp_idx);
 			break;
-		/* *(u64 *)(dst + off) += src */
-		case BPF_STX | BPF_XADD | BPF_DW:
+		case BPF_STX | BPF_ATOMIC | BPF_DW:
+			if (insn->imm != BPF_ADD) {
+				pr_err_ratelimited(
+					"eBPF filter atomic op code %02x (@%d) unsupported\n",
+					code, i);
+				return -ENOTSUPP;
+			}
+			/* *(u64 *)(dst + off) += src */
+
 			EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off));
 			tmp_idx = ctx->idx * 4;
 			EMIT(PPC_RAW_LDARX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1], 0));

arch/riscv/net/bpf_jit_comp32.c

Lines changed: 16 additions & 4 deletions

@@ -881,7 +881,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off,
 	const s8 *rd = bpf_get_reg64(dst, tmp1, ctx);
 	const s8 *rs = bpf_get_reg64(src, tmp2, ctx);
 
-	if (mode == BPF_XADD && size != BPF_W)
+	if (mode == BPF_ATOMIC && size != BPF_W)
 		return -1;
 
 	emit_imm(RV_REG_T0, off, ctx);
@@ -899,7 +899,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off,
 	case BPF_MEM:
 		emit(rv_sw(RV_REG_T0, 0, lo(rs)), ctx);
 		break;
-	case BPF_XADD:
+	case BPF_ATOMIC: /* Only BPF_ADD supported */
 		emit(rv_amoadd_w(RV_REG_ZERO, lo(rs), RV_REG_T0, 0, 0),
 		     ctx);
 		break;
@@ -1260,7 +1260,6 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_STX | BPF_MEM | BPF_H:
 	case BPF_STX | BPF_MEM | BPF_W:
 	case BPF_STX | BPF_MEM | BPF_DW:
-	case BPF_STX | BPF_XADD | BPF_W:
 		if (BPF_CLASS(code) == BPF_ST) {
 			emit_imm32(tmp2, imm, ctx);
 			src = tmp2;
@@ -1271,8 +1270,21 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			return -1;
 		break;
 
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+		if (insn->imm != BPF_ADD) {
+			pr_info_once(
+				"bpf-jit: not supported: atomic operation %02x ***\n",
+				insn->imm);
+			return -EFAULT;
+		}
+
+		if (emit_store_r64(dst, src, off, ctx, BPF_SIZE(code),
+				   BPF_MODE(code)))
+			return -1;
+		break;
+
 	/* No hardware support for 8-byte atomics in RV32. */
-	case BPF_STX | BPF_XADD | BPF_DW:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
 		/* Fallthrough. */
 
 notsupported:

arch/riscv/net/bpf_jit_comp64.c

Lines changed: 12 additions & 4 deletions

@@ -1027,10 +1027,18 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_add(RV_REG_T1, RV_REG_T1, rd, ctx);
 		emit_sd(RV_REG_T1, 0, rs, ctx);
 		break;
-	/* STX XADD: lock *(u32 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_W:
-	/* STX XADD: lock *(u64 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_DW:
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (insn->imm != BPF_ADD) {
+			pr_err("bpf-jit: not supported: atomic operation %02x ***\n",
+			       insn->imm);
+			return -EINVAL;
+		}
+
+		/* atomic_add: lock *(u32 *)(dst + off) += src
+		 * atomic_add: lock *(u64 *)(dst + off) += src
+		 */
+
 		if (off) {
 			if (is_12b_int(off)) {
 				emit_addi(RV_REG_T1, rd, off, ctx);

arch/s390/net/bpf_jit_comp.c

Lines changed: 16 additions & 11 deletions

@@ -1205,18 +1205,23 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		jit->seen |= SEEN_MEM;
 		break;
 	/*
-	 * BPF_STX XADD (atomic_add)
+	 * BPF_ATOMIC
 	 */
-	case BPF_STX | BPF_XADD | BPF_W: /* *(u32 *)(dst + off) += src */
-		/* laal %w0,%src,off(%dst) */
-		EMIT6_DISP_LH(0xeb000000, 0x00fa, REG_W0, src_reg,
-			      dst_reg, off);
-		jit->seen |= SEEN_MEM;
-		break;
-	case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */
-		/* laalg %w0,%src,off(%dst) */
-		EMIT6_DISP_LH(0xeb000000, 0x00ea, REG_W0, src_reg,
-			      dst_reg, off);
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+		if (insn->imm != BPF_ADD) {
+			pr_err("Unknown atomic operation %02x\n", insn->imm);
+			return -1;
+		}
+
+		/* *(u32/u64 *)(dst + off) += src
+		 *
+		 * BPF_W:  laal  %w0,%src,off(%dst)
+		 * BPF_DW: laalg %w0,%src,off(%dst)
+		 */
+		EMIT6_DISP_LH(0xeb000000,
+			      BPF_SIZE(insn->code) == BPF_W ? 0x00fa : 0x00ea,
+			      REG_W0, src_reg, dst_reg, off);
 		jit->seen |= SEEN_MEM;
 		break;
 	/*

arch/sparc/net/bpf_jit_comp_64.c

Lines changed: 14 additions & 3 deletions

@@ -1366,12 +1366,18 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 	}
 
-	/* STX XADD: lock *(u32 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_W: {
+	case BPF_STX | BPF_ATOMIC | BPF_W: {
 		const u8 tmp = bpf2sparc[TMP_REG_1];
 		const u8 tmp2 = bpf2sparc[TMP_REG_2];
 		const u8 tmp3 = bpf2sparc[TMP_REG_3];
 
+		if (insn->imm != BPF_ADD) {
+			pr_err_once("unknown atomic op %02x\n", insn->imm);
+			return -EINVAL;
+		}
+
+		/* lock *(u32 *)(dst + off) += src */
+
 		if (insn->dst_reg == BPF_REG_FP)
 			ctx->saw_frame_pointer = true;
 
@@ -1390,11 +1396,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 	}
 	/* STX XADD: lock *(u64 *)(dst + off) += src */
-	case BPF_STX | BPF_XADD | BPF_DW: {
+	case BPF_STX | BPF_ATOMIC | BPF_DW: {
 		const u8 tmp = bpf2sparc[TMP_REG_1];
 		const u8 tmp2 = bpf2sparc[TMP_REG_2];
 		const u8 tmp3 = bpf2sparc[TMP_REG_3];
 
+		if (insn->imm != BPF_ADD) {
+			pr_err_once("unknown atomic op %02x\n", insn->imm);
+			return -EINVAL;
+		}
+
 		if (insn->dst_reg == BPF_REG_FP)
 			ctx->saw_frame_pointer = true;
 
