Commit 41d0c46

yunwei37 authored and anakryiko committed
libbpf: Fix some typos in comments
Fix some spelling errors in the code comments of libbpf:

    betwen       -> between
    paremeters   -> parameters
    knowning     -> knowing
    definiton    -> definition
    compatiblity -> compatibility
    overriden    -> overridden
    occured      -> occurred
    proccess     -> process
    managment    -> management
    nessary      -> necessary

Signed-off-by: Yusheng Zheng <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
1 parent 72d8508 commit 41d0c46

File tree

8 files changed (+13, -13 lines changed)

tools/lib/bpf/bpf_helpers.h

Lines changed: 1 addition & 1 deletion
@@ -341,7 +341,7 @@ extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __weak __ksym;
  * I.e., it looks almost like high-level for each loop in other languages,
  * supports continue/break, and is verifiable by BPF verifier.
  *
- * For iterating integers, the difference betwen bpf_for_each(num, i, N, M)
+ * For iterating integers, the difference between bpf_for_each(num, i, N, M)
  * and bpf_for(i, N, M) is in that bpf_for() provides additional proof to
  * verifier that i is in [N, M) range, and in bpf_for_each() case i is `int
  * *`, not just `int`. So for integers bpf_for() is more convenient.
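
For context, the macros documented in this comment might be used roughly as follows; this is a minimal sketch, assuming a kernel with open-coded iterator support and a generated vmlinux.h at hand (the section name, attach point, and program name are placeholders):

/* Sums integers in [0, 10) both ways, purely for illustration. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

SEC("tp_btf/sched_switch")
int iter_demo(void *ctx)
{
	int i, sum = 0;
	int *n;

	bpf_for(i, 0, 10)		/* i is plain `int`, verifier knows 0 <= i < 10 */
		sum += i;

	bpf_for_each(num, n, 0, 10)	/* n is `int *` and must be dereferenced */
		sum += *n;

	bpf_printk("sum = %d", sum);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";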

tools/lib/bpf/bpf_tracing.h

Lines changed: 1 addition & 1 deletion
@@ -808,7 +808,7 @@ struct pt_regs;
  * tp_btf/fentry/fexit BPF programs. It hides the underlying platform-specific
  * low-level way of getting kprobe input arguments from struct pt_regs, and
  * provides a familiar typed and named function arguments syntax and
- * semantics of accessing kprobe input paremeters.
+ * semantics of accessing kprobe input parameters.
  *
  * Original struct pt_regs* context is preserved as 'ctx' argument. This might
  * be necessary when using BPF helpers like bpf_perf_event_output().
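
This comment documents the BPF_KPROBE family of macros in bpf_tracing.h; a typical use looks roughly like the sketch below, where the attach target do_unlinkat and the handler name are placeholders:

/* Named, typed kprobe arguments; the raw struct pt_regs stays available as 'ctx'. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/do_unlinkat")
int BPF_KPROBE(handle_unlinkat, int dfd, struct filename *name)
{
	bpf_printk("do_unlinkat called, dfd=%d", dfd);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";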

tools/lib/bpf/btf.c

Lines changed: 1 addition & 1 deletion
@@ -4230,7 +4230,7 @@ static bool btf_dedup_identical_structs(struct btf_dedup *d, __u32 id1, __u32 id
  * consists of portions of the graph that come from multiple compilation units.
  * This is due to the fact that types within single compilation unit are always
  * deduplicated and FWDs are already resolved, if referenced struct/union
- * definiton is available. So, if we had unresolved FWD and found corresponding
+ * definition is available. So, if we had unresolved FWD and found corresponding
  * STRUCT/UNION, they will be from different compilation units. This
  * consequently means that when we "link" FWD to corresponding STRUCT/UNION,
  * type graph will likely have at least two different BTF types that describe

tools/lib/bpf/btf.h

Lines changed: 1 addition & 1 deletion
@@ -286,7 +286,7 @@ LIBBPF_API void btf_dump__free(struct btf_dump *d);
 LIBBPF_API int btf_dump__dump_type(struct btf_dump *d, __u32 id);
 
 struct btf_dump_emit_type_decl_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
  size_t sz;
  /* optional field name for type declaration, e.g.:
  * - struct my_struct <FNAME>
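
The 'sz' field shown here follows libbpf's general opts-struct convention, which LIBBPF_OPTS() fills in automatically. A rough sketch of passing these opts to btf_dump__emit_type_decl(); the BTF object, type id, and field name are placeholders:

#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <bpf/btf.h>

static void print_cb(void *ctx, const char *fmt, va_list args)
{
	vprintf(fmt, args);
}

static int emit_decl(const struct btf *btf, __u32 type_id)
{
	struct btf_dump *d;
	int err;

	d = btf_dump__new(btf, print_cb, NULL, NULL);
	if (!d)
		return -errno;	/* libbpf 1.0+: NULL return, error code in errno */

	LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts,
		.field_name = "my_field",	/* emit "<type decl> my_field" */
	);
	err = btf_dump__emit_type_decl(d, type_id, &opts);

	btf_dump__free(d);
	return err;
}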

tools/lib/bpf/btf_dump.c

Lines changed: 1 addition & 1 deletion
@@ -304,7 +304,7 @@ int btf_dump__dump_type(struct btf_dump *d, __u32 id)
  * definition, in which case they have to be declared inline as part of field
  * type declaration; or as a top-level anonymous enum, typically used for
  * declaring global constants. It's impossible to distinguish between two
- * without knowning whether given enum type was referenced from other type:
+ * without knowing whether given enum type was referenced from other type:
  * top-level anonymous enum won't be referenced by anything, while embedded
  * one will.
  */

tools/lib/bpf/libbpf.h

Lines changed: 5 additions & 5 deletions
@@ -152,7 +152,7 @@ struct bpf_object_open_opts {
  * log_buf and log_level settings.
  *
  * If specified, this log buffer will be passed for:
- * - each BPF progral load (BPF_PROG_LOAD) attempt, unless overriden
+ * - each BPF progral load (BPF_PROG_LOAD) attempt, unless overridden
  * with bpf_program__set_log() on per-program level, to get
  * BPF verifier log output.
  * - during BPF object's BTF load into kernel (BPF_BTF_LOAD) to get
@@ -455,7 +455,7 @@ LIBBPF_API int bpf_link__destroy(struct bpf_link *link);
 /**
  * @brief **bpf_program__attach()** is a generic function for attaching
  * a BPF program based on auto-detection of program type, attach type,
- * and extra paremeters, where applicable.
+ * and extra parameters, where applicable.
  *
  * @param prog BPF program to attach
  * @return Reference to the newly created BPF link; or NULL is returned on error,
@@ -679,7 +679,7 @@ struct bpf_uprobe_opts {
 /**
  * @brief **bpf_program__attach_uprobe()** attaches a BPF program
  * to the userspace function which is found by binary path and
- * offset. You can optionally specify a particular proccess to attach
+ * offset. You can optionally specify a particular process to attach
  * to. You can also optionally attach the program to the function
  * exit instead of entry.
  *
@@ -1593,11 +1593,11 @@ LIBBPF_API int perf_buffer__buffer_fd(const struct perf_buffer *pb, size_t buf_i
  * memory region of the ring buffer.
  * This ring buffer can be used to implement a custom events consumer.
  * The ring buffer starts with the *struct perf_event_mmap_page*, which
- * holds the ring buffer managment fields, when accessing the header
+ * holds the ring buffer management fields, when accessing the header
  * structure it's important to be SMP aware.
  * You can refer to *perf_event_read_simple* for a simple example.
  * @param pb the perf buffer structure
- * @param buf_idx the buffer index to retreive
+ * @param buf_idx the buffer index to retrieve
  * @param buf (out) gets the base pointer of the mmap()'ed memory
  * @param buf_size (out) gets the size of the mmap()'ed region
  * @return 0 on success, negative error code for failure
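
The two attach APIs touched above are typically driven from user space roughly as follows. This is a sketch only: the program handles, the libc path, and the 0x1234 offset are placeholders (a real offset would come from symbol resolution), and the returned links must stay alive for the attachment to persist:

#include <stdbool.h>
#include <stdio.h>
#include <bpf/libbpf.h>

static int attach_demo(struct bpf_program *auto_prog, struct bpf_program *uprobe_prog)
{
	struct bpf_link *auto_link, *uprobe_link;

	/* Generic attach: type and target auto-detected from the program's SEC(). */
	auto_link = bpf_program__attach(auto_prog);
	if (!auto_link) {
		fprintf(stderr, "generic attach failed\n");
		return -1;
	}

	/* Uprobe attach: entry probe (retprobe=false), any process (pid=-1),
	 * userspace function located by binary path + offset. */
	uprobe_link = bpf_program__attach_uprobe(uprobe_prog, false, -1,
						 "/usr/lib/libc.so.6", 0x1234);
	if (!uprobe_link) {
		fprintf(stderr, "uprobe attach failed\n");
		bpf_link__destroy(auto_link);
		return -1;
	}

	return 0;
}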

tools/lib/bpf/libbpf_legacy.h

Lines changed: 2 additions & 2 deletions
@@ -76,7 +76,7 @@ enum libbpf_strict_mode {
  * first BPF program or map creation operation. This is done only if
  * kernel is too old to support memcg-based memory accounting for BPF
  * subsystem. By default, RLIMIT_MEMLOCK limit is set to RLIM_INFINITY,
- * but it can be overriden with libbpf_set_memlock_rlim() API.
+ * but it can be overridden with libbpf_set_memlock_rlim() API.
  * Note that libbpf_set_memlock_rlim() needs to be called before
  * the very first bpf_prog_load(), bpf_map_create() or bpf_object__load()
  * operation.
@@ -97,7 +97,7 @@ LIBBPF_API int libbpf_set_strict_mode(enum libbpf_strict_mode mode);
 * @brief **libbpf_get_error()** extracts the error code from the passed
 * pointer
 * @param ptr pointer returned from libbpf API function
- * @return error code; or 0 if no error occured
+ * @return error code; or 0 if no error occurred
 *
 * Note, as of libbpf 1.0 this function is not necessary and not recommended
 * to be used. Libbpf doesn't return error code embedded into the pointer
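
Both comments fixed in this file concern APIs that are easy to sketch from user space; a minimal illustration, with "prog.bpf.o" as a placeholder object file name:

#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	struct bpf_object *obj;
	long err;

	/* Must run before the first bpf_object__load()/bpf_map_create() for the
	 * RLIMIT_MEMLOCK override to take effect on pre-memcg-accounting kernels. */
	libbpf_set_memlock_rlim(64 * 1024 * 1024);

	obj = bpf_object__open_file("prog.bpf.o", NULL);

	/* Legacy error extraction documented above; since libbpf 1.0 a plain
	 * NULL check plus errno is the recommended alternative. */
	err = libbpf_get_error(obj);
	if (err) {
		fprintf(stderr, "failed to open object: %ld\n", err);
		return 1;
	}

	bpf_object__close(obj);
	return 0;
}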

tools/lib/bpf/skel_internal.h

Lines changed: 1 addition & 1 deletion
@@ -107,7 +107,7 @@ static inline void skel_free(const void *p)
  * The loader program will perform probe_read_kernel() from maps.rodata.initial_value.
  * skel_finalize_map_data() sets skel->rodata to point to actual value in a bpf map and
  * does maps.rodata.initial_value = ~0ULL to signal skel_free_map_data() that kvfree
- * is not nessary.
+ * is not necessary.
  *
  * For user space:
  * skel_prep_map_data() mmaps anon memory into skel->rodata that can be accessed directly.
