Commit 68ed2a3

jankara authored and akpm00 committed
mm: avoid overflows in dirty throttling logic
The dirty throttling logic is interspersed with assumptions that dirty limits in PAGE_SIZE units fit into 32 bits (so that various multiplications fit into 64 bits). If limits end up being larger, we will hit overflows, possible divisions by 0, etc.

Fix these problems by never allowing such large dirty limits, as they have dubious practical value anyway. For the dirty_bytes / dirty_background_bytes interfaces we can just refuse to set such large limits. For dirty_ratio / dirty_background_ratio it isn't so simple, as the dirty limit is computed from the amount of available memory, which can change due to memory hotplug etc. So when converting dirty limits from ratios to numbers of pages, we just don't allow the result to exceed UINT_MAX.

This is a root-only triggerable problem which occurs when the operator sets dirty limits to > 16 TB.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Jan Kara <[email protected]>
Reported-by: Zach O'Keefe <[email protected]>
Reviewed-By: Zach O'Keefe <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
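For scale (an illustration only, not part of the patch): with 4 KiB pages, UINT_MAX pages works out to just under 16 TiB, which is where the new cap sits, and a limit that needs more than 32 bits can make 64-bit products in the throttling math wrap around. A minimal userspace sketch of that failure class, with made-up names and an assumed 4 KiB page size (the real expressions live in mm/page-writeback.c and differ):

    /* Illustrative only: why limits above UINT_MAX pages are dangerous. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t page_size = 4096;          /* assumed PAGE_SIZE */
            uint64_t max_pages = UINT32_MAX;    /* the new cap */

            /* (2^32 - 1) * 4096 bytes is just under 16 TiB */
            printf("cap: %llu bytes\n",
                   (unsigned long long)(max_pages * page_size));

            /*
             * A limit that does not fit in 32 bits: 2^35 pages (128 TiB
             * with 4 KiB pages). Squaring it needs 70 bits, so the
             * 64-bit product silently wraps around (here, to 0).
             */
            uint64_t huge_limit = 1ULL << 35;
            printf("wrapped product: %llu\n",
                   (unsigned long long)(huge_limit * huge_limit));
            return 0;
    }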
1 parent 8dfcffa commit 68ed2a3

File tree

1 file changed: +26 −4 lines changed

mm/page-writeback.c

Lines changed: 26 additions & 4 deletions
@@ -417,13 +417,20 @@ static void domain_dirty_limits(struct dirty_throttle_control *dtc)
 	else
 		bg_thresh = (bg_ratio * available_memory) / PAGE_SIZE;
 
-	if (bg_thresh >= thresh)
-		bg_thresh = thresh / 2;
 	tsk = current;
 	if (rt_task(tsk)) {
 		bg_thresh += bg_thresh / 4 + global_wb_domain.dirty_limit / 32;
 		thresh += thresh / 4 + global_wb_domain.dirty_limit / 32;
 	}
+	/*
+	 * Dirty throttling logic assumes the limits in page units fit into
+	 * 32-bits. This gives 16TB dirty limits max which is hopefully enough.
+	 */
+	if (thresh > UINT_MAX)
+		thresh = UINT_MAX;
+	/* This makes sure bg_thresh is within 32-bits as well */
+	if (bg_thresh >= thresh)
+		bg_thresh = thresh / 2;
 	dtc->thresh = thresh;
 	dtc->bg_thresh = bg_thresh;
 
@@ -473,7 +480,11 @@ static unsigned long node_dirty_limit(struct pglist_data *pgdat)
 	if (rt_task(tsk))
 		dirty += dirty / 4;
 
-	return dirty;
+	/*
+	 * Dirty throttling logic assumes the limits in page units fit into
+	 * 32-bits. This gives 16TB dirty limits max which is hopefully enough.
+	 */
+	return min_t(unsigned long, dirty, UINT_MAX);
 }
 
 /**
@@ -510,10 +521,17 @@ static int dirty_background_bytes_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *lenp, loff_t *ppos)
 {
 	int ret;
+	unsigned long old_bytes = dirty_background_bytes;
 
 	ret = proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
-	if (ret == 0 && write)
+	if (ret == 0 && write) {
+		if (DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE) >
+								UINT_MAX) {
+			dirty_background_bytes = old_bytes;
+			return -ERANGE;
+		}
 		dirty_background_ratio = 0;
+	}
 	return ret;
 }
 
@@ -539,6 +557,10 @@ static int dirty_bytes_handler(struct ctl_table *table, int write,
 
 	ret = proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
 	if (ret == 0 && write && vm_dirty_bytes != old_bytes) {
+		if (DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE) > UINT_MAX) {
+			vm_dirty_bytes = old_bytes;
+			return -ERANGE;
+		}
 		writeback_set_ratelimit();
 		vm_dirty_ratio = 0;
 	}
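The userspace-visible effect of the two handler changes (a sketch, not part of the patch): writing a value to /proc/sys/vm/dirty_bytes or /proc/sys/vm/dirty_background_bytes whose page count would exceed UINT_MAX now restores the previous value and fails with -ERANGE, while the ratio-derived limits are silently clamped in domain_dirty_limits() and node_dirty_limit(). The acceptance boundary, reproduced here with standard C types and an assumed 4 KiB page size:

    #include <stdbool.h>
    #include <stdint.h>

    #define EXAMPLE_PAGE_SIZE 4096ULL  /* assumption; the kernel uses PAGE_SIZE */

    /* Mirrors the DIV_ROUND_UP(bytes, PAGE_SIZE) > UINT_MAX check above. */
    static bool dirty_bytes_acceptable(uint64_t bytes)
    {
            uint64_t pages = (bytes + EXAMPLE_PAGE_SIZE - 1) / EXAMPLE_PAGE_SIZE;

            return pages <= UINT32_MAX;
    }

With 4 KiB pages, anything up to UINT_MAX * 4096 bytes (just under 16 TiB) is still accepted; the first rejected value is one byte past that.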
