
Commit 3c9d31c

fdmanana authored and kdave committed
btrfs: add back missing dirty page rate limiting to defrag
A defrag operation can dirty a lot of pages, especially if operating on the entire file or a large file range. Any task dirtying pages should periodically call balance_dirty_pages_ratelimited(), as stated in that function's comments, otherwise it can leave too many dirty pages in the system. This is what we did before the refactoring in 5.16, and it should have remained, just like in the buffered write path and relocation. So restore that behaviour.

Fixes: 7b50803 ("btrfs: defrag: use defrag_one_cluster() to implement btrfs_defrag_file()")
CC: [email protected] # 5.16
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: David Sterba <[email protected]>
1 parent 0cb5950 commit 3c9d31c

File tree

1 file changed (+5, -0)

fs/btrfs/ioctl.c

Lines changed: 5 additions & 0 deletions

@@ -1579,6 +1579,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
 	}
 
 	while (cur < last_byte) {
+		const unsigned long prev_sectors_defragged = sectors_defragged;
 		u64 cluster_end;
 
 		/* The cluster size 256K should always be page aligned */
@@ -1610,6 +1611,10 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
 				       cluster_end + 1 - cur, extent_thresh,
 				       newer_than, do_compress,
 				       &sectors_defragged, max_to_defrag);
+
+		if (sectors_defragged > prev_sectors_defragged)
+			balance_dirty_pages_ratelimited(inode->i_mapping);
+
 		btrfs_inode_unlock(inode, 0);
 		if (ret < 0)
 			break;
