
Commit 008cfe4

xzpeter authored and torvalds committed
mm: Introduce mm_struct.has_pinned
(Commit message largely collected from Jason Gunthorpe)

Reduce the chance of false positives from page_maybe_dma_pinned() by keeping track of whether the mm_struct has ever been used with pin_user_pages(). This lets cases that might drive up the page ref_count avoid any penalty from handling dma_pinned pages.

Future work is planned to provide a more sophisticated solution, likely turning this into a real counter. For now, make it an atomic_t but use it as a boolean for simplicity.

Suggested-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
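To see why the hint pays off, note that page_maybe_dma_pinned() works off the page ref_count and can therefore misfire on pages that simply have many ordinary references. A minimal sketch of the intended consumer pattern, assuming a hypothetical helper name mm_maybe_has_pinned_pages() (nothing below is part of this commit):

#include <linux/atomic.h>
#include <linux/mm.h>
#include <linux/mm_types.h>

/*
 * Hypothetical helper, not part of this commit: has_pinned is only a
 * boolean hint for now.  It cannot be falsely zero once a pin has
 * happened, but it stays set forever, so false positives remain.
 */
static inline bool mm_maybe_has_pinned_pages(struct mm_struct *mm)
{
	return atomic_read(&mm->has_pinned) != 0;
}

A caller that currently pays for page_maybe_dma_pinned() on every page could then short-circuit:

	if (mm_maybe_has_pinned_pages(mm) && page_maybe_dma_pinned(page))
		/* take the slow path for a possibly DMA-pinned page */;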
Parent: a1bffa4

3 files changed: 17 additions, 0 deletions

include/linux/mm_types.h (10 additions, 0 deletions)

@@ -436,6 +436,16 @@ struct mm_struct {
 		 */
 		atomic_t mm_count;
 
+		/**
+		 * @has_pinned: Whether this mm has pinned any pages.  This can
+		 * be either replaced in the future by @pinned_vm when it
+		 * becomes stable, or grow into a counter on its own. We're
+		 * aggressive on this bit now - even if the pinned pages were
+		 * unpinned later on, we'll still keep this bit set for the
+		 * lifecycle of this mm just for simplicity.
+		 */
+		atomic_t has_pinned;
+
 #ifdef CONFIG_MMU
 		atomic_long_t pgtables_bytes;	/* PTE page table pages */
 #endif
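The field is an atomic_t rather than a plain bool because GUP can run concurrently in several threads sharing one mm, with no lock serializing writes to this flag; atomic_set()/atomic_read() give a tear-free single-word flag. The comment also anticipates the counter mentioned in the commit message; a speculative sketch of that direction (purely illustrative, not in this commit, helper names invented):

/*
 * Speculative future shape hinted at by "grow into a counter on its
 * own": pins increment, unpins decrement, and the hint can return to
 * zero.  Invented names; not part of this commit.
 */
static inline void mm_account_pin(struct mm_struct *mm)
{
	atomic_inc(&mm->has_pinned);
}

static inline void mm_unaccount_pin(struct mm_struct *mm)
{
	atomic_dec(&mm->has_pinned);
}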

kernel/fork.c (1 addition, 0 deletions)

@@ -1011,6 +1011,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	mm_pgtables_bytes_init(mm);
 	mm->map_count = 0;
 	mm->locked_vm = 0;
+	atomic_set(&mm->has_pinned, 0);
 	atomic64_set(&mm->pinned_vm, 0);
 	memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
 	spin_lock_init(&mm->page_table_lock);

mm/gup.c (6 additions, 0 deletions)

@@ -1255,6 +1255,9 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 		BUG_ON(*locked != 1);
 	}
 
+	if (flags & FOLL_PIN)
+		atomic_set(&current->mm->has_pinned, 1);
+
 	/*
 	 * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
 	 * is to set FOLL_GET if the caller wants pages[] filled in (but has
@@ -2660,6 +2663,9 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 				       FOLL_FAST_ONLY)))
 		return -EINVAL;
 
+	if (gup_flags & FOLL_PIN)
+		atomic_set(&current->mm->has_pinned, 1);
+
 	if (!(gup_flags & FOLL_FAST_ONLY))
 		might_lock_read(&current->mm->mmap_lock);
 
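These two hunks sit at the entry of the slow path (__get_user_pages_locked()) and the fast path (internal_get_user_pages_fast()), so every pin_user_pages*() variant marks the mm before the first page is actually pinned. A sketch of a caller that would set the flag (illustrative only; example_pin() is an invented function):

#include <linux/mm.h>

/* Invented example, not from this commit. */
static void example_pin(unsigned long addr)
{
	struct page *page;

	/*
	 * pin_user_pages_fast() passes FOLL_PIN down to
	 * internal_get_user_pages_fast(), so the hunk above sets
	 * current->mm->has_pinned before the pin proceeds.
	 */
	if (pin_user_pages_fast(addr, 1, FOLL_WRITE, &page) != 1)
		return;

	/* ... use the page, e.g. as a DMA target ... */

	unpin_user_page(page);
	/* has_pinned deliberately stays set even after the unpin. */
}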
