
Commit 6746aff

Wu Fengguang authored and Andi Kleen committed
HWPOISON: shmem: call set_page_dirty() with locked page
The dirtying of the page and set_page_dirty() can be moved into the page lock.

- In shmem_write_end(), the page was dirtied while the page lock was held,
  but it was being marked dirty just after dropping the page lock.
- In shmem_symlink(), both the dirtying and the marking can be moved into
  the page lock.

It's valuable for the hwpoison code to know whether one bad page can be
dropped without losing data. It mainly judges by testing the PG_dirty bit
after taking the page lock. So it becomes important that the dirtying of the
page and the marking of dirtiness are both done inside the page lock. This
is common practice, but sadly not a rule.

The noticeable exceptions are
- mapped pages
- pages with buffer_heads

The above pages could go dirty at any time. Fortunately the hwpoison code
will unmap the page and release the buffer_heads beforehand anyway.

Many other types of pages (eg. metadata pages) can also be dirtied at will
by their owners; the hwpoison code cannot do meaningful things to them
anyway. Only the dirtiness of pagecache pages owned by regular files is of
interest.

v2: AK: Add comment about set_page_dirty rules (suggested by Peter Zijlstra)

Acked-by: Hugh Dickins <[email protected]>
Reviewed-by: WANG Cong <[email protected]>
Signed-off-by: Wu Fengguang <[email protected]>
Signed-off-by: Andi Kleen <[email protected]>
1 parent 2571873 commit 6746aff

File tree

2 files changed: 9 additions (+), 2 deletions (-)

mm/page-writeback.c

Lines changed: 7 additions & 0 deletions

@@ -1149,6 +1149,13 @@ int redirty_page_for_writepage(struct writeback_control *wbc, struct page *page)
 EXPORT_SYMBOL(redirty_page_for_writepage);
 
 /*
+ * Dirty a page.
+ *
+ * For pages with a mapping this should be done under the page lock
+ * for the benefit of asynchronous memory errors who prefer a consistent
+ * dirty state. This rule can be broken in some special cases,
+ * but should be better not to.
+ *
  * If the mapping doesn't provide a set_page_dirty a_op, then
  * just fall through and assume that it wants buffer_heads.
  */

mm/shmem.c

Lines changed: 2 additions & 2 deletions

@@ -1630,8 +1630,8 @@ shmem_write_end(struct file *file, struct address_space *mapping,
 	if (pos + copied > inode->i_size)
 		i_size_write(inode, pos + copied);
 
-	unlock_page(page);
 	set_page_dirty(page);
+	unlock_page(page);
 	page_cache_release(page);
 
 	return copied;

@@ -1968,13 +1968,13 @@ static int shmem_symlink(struct inode *dir, struct dentry *dentry, const char *s
 			iput(inode);
 			return error;
 		}
-		unlock_page(page);
 		inode->i_mapping->a_ops = &shmem_aops;
 		inode->i_op = &shmem_symlink_inode_operations;
 		kaddr = kmap_atomic(page, KM_USER0);
 		memcpy(kaddr, symname, len);
 		kunmap_atomic(kaddr, KM_USER0);
 		set_page_dirty(page);
+		unlock_page(page);
 		page_cache_release(page);
 	}
 	if (dir->i_mode & S_ISGID)
