Commit 9852a72

Michal Hocko authored and Linus Torvalds committed
mm: drop hotplug lock from lru_add_drain_all()
Pulling cpu hotplug locks inside an mm core function like lru_add_drain_all() just asks for problems, and the recent lockdep splat [1] proves this. While the usage in that particular case might be wrong, we should avoid the locking anyway, because lru_add_drain_all() is used in many places. It seems that this is not all that hard to achieve actually. We have done the same thing for drain_all_pages(), which is analogous, in commit a459eeb ("mm, page_alloc: do not depend on cpu hotplug locks inside the allocator"). All we have to care about is to handle:

- the work item might be executed on a different cpu in a worker from an unbound pool, so it is not pinned to the cpu being drained
- we have to make sure that we do not race with page_alloc_cpu_dead calling lru_add_drain_cpu

The first part is already handled because the worker calls lru_add_drain, which disables preemption when calling lru_add_drain_cpu on the local cpu it is draining. The latter holds because page_alloc_cpu_dead is called on the controlling CPU after the hotplugged CPU has vanished completely.

[1] http://lkml.kernel.org/r/[email protected]

[add a cpu hotplug locking interaction as per tglx]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Thomas Gleixner <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 0486a38 commit 9852a72

File tree

3 files changed, +9 −10 lines changed

include/linux/swap.h

Lines changed: 0 additions & 1 deletion
@@ -332,7 +332,6 @@ extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_all(void);
-extern void lru_add_drain_all_cpuslocked(void);
 extern void rotate_reclaimable_page(struct page *page);
 extern void deactivate_file_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);

mm/memory_hotplug.c

Lines changed: 1 addition & 1 deletion
@@ -1637,7 +1637,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
 		goto failed_removal;

 	cond_resched();
-	lru_add_drain_all_cpuslocked();
+	lru_add_drain_all();
 	drain_all_pages(zone);

 	pfn = scan_movable_pages(start_pfn, end_pfn);

mm/swap.c

Lines changed: 8 additions & 8 deletions
@@ -688,7 +688,14 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)

 static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);

-void lru_add_drain_all_cpuslocked(void)
+/*
+ * Doesn't need any cpu hotplug locking because we do rely on per-cpu
+ * kworkers being shut down before our page_alloc_cpu_dead callback is
+ * executed on the offlined cpu.
+ * Calling this function with cpu hotplug locks held can actually lead
+ * to obscure indirect dependencies via WQ context.
+ */
+void lru_add_drain_all(void)
 {
 	static DEFINE_MUTEX(lock);
 	static struct cpumask has_work;
@@ -724,13 +731,6 @@ void lru_add_drain_all_cpuslocked(void)
 	mutex_unlock(&lock);
 }

-void lru_add_drain_all(void)
-{
-	get_online_cpus();
-	lru_add_drain_all_cpuslocked();
-	put_online_cpus();
-}
-
 /**
  * release_pages - batched put_page()
  * @pages: array of pages to release
