
Commit 665d495

adam900710 authored and kdave committed
btrfs: scrub: Don't use inode page cache in scrub_handle_errored_block()
In commit ac0b414 ("btrfs: scrub: Don't use inode pages for device replace") we removed the copy_nocow_pages() branch to avoid corrupting compressed nodatasum extents.

However, that commit only solves the problem in scrub_extent(). If scrub_pages() fails to read some pages, sctx->no_io_error_seen will be non-zero and we go to the fixup function scrub_handle_errored_block().

In scrub_handle_errored_block(), for an sctx without csum (no matter whether we're doing replace or scrub) we go to the scrub_fixup_nodatasum() routine, which does a similar thing to copy_nocow_pages(), but without the extra check that copy_nocow_pages() performs.

So for test cases like btrfs/100, where we emulate read errors during replace/scrub, we could corrupt compressed extent data again.

This patch fixes it simply by avoiding any "optimization" for nodatasum and falling back to the normal fixup routine, which tries to read from any good copy.

This also solves the WARN_ON() or deadlock caused by the lame backref iteration in the scrub_fixup_nodatasum() routine.

The deadlock or WARN_ON() was not triggered before commit ac0b414 ("btrfs: scrub: Don't use inode pages for device replace") since copy_nocow_pages() has better locking and an extra check for data extents, and it already does the fixup work by trying to read data from any good copy, so it never reaches scrub_fixup_nodatasum() anyway.

This patch disables the faulty code; it will be removed completely in a followup patch.

Fixes: ac0b414 ("btrfs: scrub: Don't use inode pages for device replace")
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
1 parent 97b1917 commit 665d495
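
For orientation, here is a condensed, self-contained sketch of the control flow this change leaves in place. It is illustrative only: the types and helper names below are stubs invented for the example (not the kernel's definitions). It simply shows that the nodatasum shortcut is compiled out with "if (0 && ...)", so csum-less data extents fall through to the normal recheck-from-good-mirror fixup. The actual hunks are in the diff below.

/*
 * Illustrative sketch only -- stub types and helpers, not kernel code.
 * It mirrors the shape of the patched scrub_handle_errored_block() path:
 * the nodatasum shortcut is dead code, so the normal mirror recheck runs.
 */
#include <stdio.h>

struct scrub_ctx { int is_dev_replace; };

static int fixup_nodatasum_stub(void)
{
	/* Old path: would write the inode page cache back to disk, which
	 * corrupts compressed nodatasum extents. */
	return -1;
}

static int recheck_from_good_copy_stub(void)
{
	/* Normal fixup: re-read the block from a good mirror and rewrite
	 * the bad copy from that. */
	return 0;
}

static int handle_errored_block(struct scrub_ctx *sctx, int is_metadata,
				int have_csum)
{
	/* Branch disabled by the patch; removed completely in a followup. */
	if (0 && !is_metadata && !have_csum) {
		if (sctx->is_dev_replace)
			fprintf(stderr, "nodatasum fixup during replace?\n");
		return fixup_nodatasum_stub();
	}

	return recheck_from_good_copy_stub();
}

int main(void)
{
	struct scrub_ctx sctx = { .is_dev_replace = 1 };

	/* Data block without csum: still takes the mirror-recheck path. */
	printf("fixup result: %d\n", handle_errored_block(&sctx, 0, 0));
	return 0;
}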


fs/btrfs/scrub.c

Lines changed: 9 additions & 8 deletions
@@ -1151,11 +1151,6 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 		return ret;
 	}
 
-	if (sctx->is_dev_replace && !is_metadata && !have_csum) {
-		sblocks_for_recheck = NULL;
-		goto nodatasum_case;
-	}
-
 	/*
 	 * read all mirrors one after the other. This includes to
 	 * re-read the extent or metadata block that failed (that was
@@ -1268,13 +1263,19 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 		goto out;
 	}
 
-	if (!is_metadata && !have_csum) {
+	/*
+	 * NOTE: Even for nodatasum case, it's still possible that it's a
+	 * compressed data extent, thus scrub_fixup_nodatasum(), which write
+	 * inode page cache onto disk, could cause serious data corruption.
+	 *
+	 * So here we could only read from disk, and hope our recovery could
+	 * reach disk before the newer write.
+	 */
+	if (0 && !is_metadata && !have_csum) {
 		struct scrub_fixup_nodatasum *fixup_nodatasum;
 
 		WARN_ON(sctx->is_dev_replace);
 
-nodatasum_case:
-
 		/*
 		 * !is_metadata and !have_csum, this means that the data
 		 * might not be COWed, that it might be modified
