
Commit 758f2df

fdmanana authored and masoncl committed
Btrfs: fix scrub preventing unused block groups from being deleted
Currently scrub can race with the cleaner kthread when the latter attempts
to delete an unused block group, and the result is preventing the cleaner
kthread from ever deleting the block group later - unless the block group
becomes used and unused again. The following diagram illustrates that race:

              CPU 1                                 CPU 2

                                           cleaner kthread
                                             btrfs_delete_unused_bgs()

                                               gets block group X from
                                               fs_info->unused_bgs and
                                               removes it from that list

 scrub_enumerate_chunks()

   searches device tree using
   its commit root

   finds device extent for
   block group X

   gets block group X from the tree
   fs_info->block_group_cache_tree
   (via btrfs_lookup_block_group())

   sets bg X to RO

                                               sees the block group is
                                               already RO and therefore
                                               doesn't delete it nor adds
                                               it back to unused list

So fix this by making scrub add the block group again to the list of
unused block groups, if the block group is still unused when it finished
scrubbing it and it hasn't been removed already.

Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: Chris Mason <[email protected]>
1 parent 020d5b7 commit 758f2df
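
For context, the sketch below illustrates the cleaner-side behaviour the commit message describes. It is a simplified, hypothetical reconstruction based only on that description (the real btrfs_delete_unused_bgs() in fs/btrfs/extent-tree.c also does transaction and space accounting work omitted here): the block group is taken off fs_info->unused_bgs before the checks, so a group that scrub has just flipped to RO is skipped without ever being re-queued.

/*
 * Simplified sketch (not verbatim kernel code) of the cleaner-side path
 * described in the commit message. The key point is that the block group
 * is removed from fs_info->unused_bgs *before* the RO/used checks, so if
 * scrub already set the group read-only it is skipped here and, before
 * this fix, nothing ever added it back to the unused list.
 */
void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
{
	struct btrfs_block_group_cache *block_group;

	spin_lock(&fs_info->unused_bgs_lock);
	while (!list_empty(&fs_info->unused_bgs)) {
		block_group = list_first_entry(&fs_info->unused_bgs,
					       struct btrfs_block_group_cache,
					       bg_list);
		list_del_init(&block_group->bg_list);
		spin_unlock(&fs_info->unused_bgs_lock);

		spin_lock(&block_group->lock);
		if (block_group->ro || block_group->reserved ||
		    btrfs_block_group_used(&block_group->item)) {
			/* Already RO (e.g. scrub raced us) or in use again. */
			spin_unlock(&block_group->lock);
			goto next;
		}
		spin_unlock(&block_group->lock);

		/* ... mark the chunk removed and delete the block group ... */
next:
		btrfs_put_block_group(block_group);
		spin_lock(&fs_info->unused_bgs_lock);
	}
	spin_unlock(&fs_info->unused_bgs_lock);
}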

File tree

3 files changed: +24 -1 lines changed

fs/btrfs/ctree.h

Lines changed: 1 addition & 0 deletions
@@ -3416,6 +3416,7 @@ int btrfs_cross_ref_exist(struct btrfs_trans_handle *trans,
 struct btrfs_block_group_cache *btrfs_lookup_block_group(
 						 struct btrfs_fs_info *info,
 						 u64 bytenr);
+void btrfs_get_block_group(struct btrfs_block_group_cache *cache);
 void btrfs_put_block_group(struct btrfs_block_group_cache *cache);
 int get_block_group_index(struct btrfs_block_group_cache *cache);
 struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans,

fs/btrfs/extent-tree.c

Lines changed: 1 addition & 1 deletion
@@ -124,7 +124,7 @@ static int block_group_bits(struct btrfs_block_group_cache *cache, u64 bits)
 	return (cache->flags & bits) == bits;
 }
 
-static void btrfs_get_block_group(struct btrfs_block_group_cache *cache)
+void btrfs_get_block_group(struct btrfs_block_group_cache *cache)
 {
 	atomic_inc(&cache->count);
 }
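
The only change here is dropping the static qualifier so that btrfs_get_block_group() can be called from fs/btrfs/scrub.c below; the matching prototype is the one-line addition to fs/btrfs/ctree.h above.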

fs/btrfs/scrub.c

Lines changed: 22 additions & 0 deletions
@@ -3641,6 +3641,28 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 		if (ro_set)
 			btrfs_dec_block_group_ro(root, cache);
 
+		/*
+		 * We might have prevented the cleaner kthread from deleting
+		 * this block group if it was already unused because we raced
+		 * and set it to RO mode first. So add it back to the unused
+		 * list, otherwise it might not ever be deleted unless a manual
+		 * balance is triggered or it becomes used and unused again.
+		 */
+		spin_lock(&cache->lock);
+		if (!cache->removed && !cache->ro && cache->reserved == 0 &&
+		    btrfs_block_group_used(&cache->item) == 0) {
+			spin_unlock(&cache->lock);
+			spin_lock(&fs_info->unused_bgs_lock);
+			if (list_empty(&cache->bg_list)) {
+				btrfs_get_block_group(cache);
+				list_add_tail(&cache->bg_list,
+					      &fs_info->unused_bgs);
+			}
+			spin_unlock(&fs_info->unused_bgs_lock);
+		} else {
+			spin_unlock(&cache->lock);
+		}
+
 		btrfs_put_block_group(cache);
 		if (ret)
 			break;
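
Note that the new code takes its own reference with btrfs_get_block_group() before linking the group onto fs_info->unused_bgs; presumably the unused list is expected to hold a reference that the cleaner kthread drops when it processes the entry (as in the sketch above), so the btrfs_put_block_group() that follows only releases scrub's lookup reference.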
