
Commit 06096d1

anakryiko authored and Alexei Starovoitov committed
libbpf: fix LDX/STX/ST CO-RE relocation size adjustment logic
Libbpf has a somewhat obscure feature of automatically adjusting the "size" of LDX/STX/ST instructions (memory load and store instructions), based on the originally recorded access size (u8, u16, u32, or u64) and the actual size of the field on the target kernel. This is meant to facilitate using BPF CO-RE on 32-bit architectures (pointers are always 64-bit in BPF, but the host kernel's BTF will have them as a 32-bit type), as well as to generally support safe type changes (unsigned integer type changes can be transparently "relocated").

One issue that surfaced only now, 5 years after this logic was implemented, is how this all works when dealing with fields that are arrays. This isn't all that easy and straightforward to hit (see the selftests that reproduce this condition), but one of the sched_ext BPF programs did hit it with an innocent-looking loop.

Long story short, libbpf used to calculate the entire array size, instead of making sure to only calculate the array's element size. But it is the element that is loaded by LDX/STX/ST instructions (1, 2, 4, or 8 bytes), so that is what libbpf should check. This patch adjusts the logic for arrays and fixes the issue.

Reported-by: Emil Tsalapatis <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
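To make the failure mode concrete, here is a minimal, hypothetical sketch (not the actual sched_ext program) of the kind of innocent-looking loop over an array field that exercises this path. It assumes a bpftool-generated vmlinux.h and the sched_switch tracepoint's prev/next task arguments; each iteration emits a 1-byte LDX whose CO-RE size check must be made against the element size (1 byte), not the 16-byte comm array.

/* innocent_loop.bpf.c -- illustrative sketch only, not from the kernel tree */
#include "vmlinux.h"		/* assumed to be generated via bpftool */
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("tp_btf/sched_switch")
int BPF_PROG(on_switch, bool preempt, struct task_struct *prev,
	     struct task_struct *next)
{
	__u64 sum = 0;
	int i;

	/* each iteration is a 1-byte LDX with a CO-RE relocation against the
	 * 'comm' field (a char[16] array); libbpf must size-check the access
	 * against the 1-byte element, not the 16-byte array
	 */
	for (i = 0; i < 16; i++)
		sum += next->comm[i];

	bpf_printk("comm byte sum: %llu", sum);
	return 0;
}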
1 parent 772b9b1 commit 06096d1

File tree: 1 file changed (+20, -4 lines)


tools/lib/bpf/relo_core.c

Lines changed: 20 additions & 4 deletions
@@ -683,7 +683,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
 {
 	const struct bpf_core_accessor *acc;
 	const struct btf_type *t;
-	__u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id;
+	__u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id, elem_id;
 	const struct btf_member *m;
 	const struct btf_type *mt;
 	bool bitfield;
@@ -706,8 +706,14 @@ static int bpf_core_calc_field_relo(const char *prog_name,
 	if (!acc->name) {
 		if (relo->kind == BPF_CORE_FIELD_BYTE_OFFSET) {
 			*val = spec->bit_offset / 8;
-			/* remember field size for load/store mem size */
-			sz = btf__resolve_size(spec->btf, acc->type_id);
+			/* remember field size for load/store mem size;
+			 * note, for arrays we care about individual element
+			 * sizes, not the overall array size
+			 */
+			t = skip_mods_and_typedefs(spec->btf, acc->type_id, &elem_id);
+			while (btf_is_array(t))
+				t = skip_mods_and_typedefs(spec->btf, btf_array(t)->type, &elem_id);
+			sz = btf__resolve_size(spec->btf, elem_id);
 			if (sz < 0)
 				return -EINVAL;
 			*field_sz = sz;
@@ -767,7 +773,17 @@ static int bpf_core_calc_field_relo(const char *prog_name,
 	case BPF_CORE_FIELD_BYTE_OFFSET:
 		*val = byte_off;
 		if (!bitfield) {
-			*field_sz = byte_sz;
+			/* remember field size for load/store mem size;
+			 * note, for arrays we care about individual element
+			 * sizes, not the overall array size
+			 */
+			t = skip_mods_and_typedefs(spec->btf, field_type_id, &elem_id);
+			while (btf_is_array(t))
+				t = skip_mods_and_typedefs(spec->btf, btf_array(t)->type, &elem_id);
+			sz = btf__resolve_size(spec->btf, elem_id);
+			if (sz < 0)
+				return -EINVAL;
+			*field_sz = sz;
 			*type_id = field_type_id;
 		}
 		break;
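The fix walks from the field's type down through any array levels before resolving the size. As a side note, the same element-size walk can be expressed with libbpf's public BTF API; the sketch below is illustrative only and mirrors the patch's internal logic (skip_mods_and_typedefs() is libbpf-internal, so the public btf__resolve_type() stands in for it here):

#include <bpf/btf.h>

/* Illustrative only: resolve the size that an LDX/STX/ST access of 'type_id'
 * actually touches, i.e. the element size when the type is an array.
 */
static __s64 access_size(const struct btf *btf, __u32 type_id)
{
	const struct btf_type *t;
	__s32 id = btf__resolve_type(btf, type_id);	/* skip typedefs/modifiers */

	if (id < 0)
		return id;

	t = btf__type_by_id(btf, id);
	while (btf_is_array(t)) {
		/* descend into the element type, e.g. __u32[8] -> __u32 */
		id = btf__resolve_type(btf, btf_array(t)->type);
		if (id < 0)
			return id;
		t = btf__type_by_id(btf, id);
	}

	/* for __u32[8] this yields 4 (element size), not 32 (array size) */
	return btf__resolve_size(btf, id);
}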
