Commit 60490e7

haoren3696 authored and Peter Zijlstra committed
perf/core: Fix perf_mmap fail when CONFIG_PERF_USE_VMALLOC enabled
This problem can be reproduced with CONFIG_PERF_USE_VMALLOC enabled on
both x86_64 and aarch64 when using sysdig -B (using eBPF) [1]. sysdig -B
works fine after rebuilding the kernel with CONFIG_PERF_USE_VMALLOC
disabled.

I tracked it down to the condition event->rb->nr_pages != nr_pages in
perf_mmap being true when CONFIG_PERF_USE_VMALLOC is enabled, with
event->rb->nr_pages = 1 and nr_pages = 2048, causing perf_mmap to return
-EINVAL. This happens because, when CONFIG_PERF_USE_VMALLOC is enabled,
rb->nr_pages is always 1.

Arches with CONFIG_PERF_USE_VMALLOC enabled by default:
arc/arm/csky/mips/sh/sparc/xtensa

Arches with CONFIG_PERF_USE_VMALLOC disabled by default: x86_64/aarch64/...

Fix this problem by using data_page_nr().

[1] https://github.com/draios/sysdig

Fixes: 906010b ("perf_event: Provide vmalloc() based mmap() backing")
Signed-off-by: Zhipeng Xie <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
1 parent b2d229d

File tree

3 files changed: +6 -6 lines

kernel/events/core.c

Lines changed: 1 addition & 1 deletion

```diff
@@ -6247,7 +6247,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 again:
 	mutex_lock(&event->mmap_mutex);
 	if (event->rb) {
-		if (event->rb->nr_pages != nr_pages) {
+		if (data_page_nr(event->rb) != nr_pages) {
 			ret = -EINVAL;
 			goto unlock;
 		}
```

kernel/events/internal.h

Lines changed: 5 additions & 0 deletions

```diff
@@ -116,6 +116,11 @@ static inline int page_order(struct perf_buffer *rb)
 }
 #endif

+static inline int data_page_nr(struct perf_buffer *rb)
+{
+	return rb->nr_pages << page_order(rb);
+}
+
 static inline unsigned long perf_data_size(struct perf_buffer *rb)
 {
 	return rb->nr_pages << (PAGE_SHIFT + page_order(rb));
```

kernel/events/ring_buffer.c

Lines changed: 0 additions & 5 deletions

```diff
@@ -859,11 +859,6 @@ void rb_free(struct perf_buffer *rb)
 }

 #else
-static int data_page_nr(struct perf_buffer *rb)
-{
-	return rb->nr_pages << page_order(rb);
-}
-
 static struct page *
 __perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff)
 {
```
