
Commit be50015

Matthew Wilcox authored and torvalds committed
mm: document how to use struct page
Be really explicit about what bits / bytes are reserved for users that
want to store extra information about the pages they allocate.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Randy Dunlap <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 036e7aa commit be50015

File tree

1 file changed: +23 −1 lines

include/linux/mm_types.h

Lines changed: 23 additions & 1 deletion
@@ -31,7 +31,29 @@ struct hmm;
  * it to keep track of whatever it is we are using the page for at the
  * moment. Note that we have no way to track which tasks are using
  * a page, though if it is a pagecache page, rmap structures can tell us
- * who is mapping it.
+ * who is mapping it. If you allocate the page using alloc_pages(), you
+ * can use some of the space in struct page for your own purposes.
+ *
+ * Pages that were once in the page cache may be found under the RCU lock
+ * even after they have been recycled to a different purpose. The page
+ * cache reads and writes some of the fields in struct page to pin the
+ * page before checking that it's still in the page cache. It is vital
+ * that all users of struct page:
+ * 1. Use the first word as PageFlags.
+ * 2. Clear or preserve bit 0 of page->compound_head. It is used as
+ *    PageTail for compound pages, and the page cache must not see false
+ *    positives. Some users put a pointer here (guaranteed to be at least
+ *    4-byte aligned), other users avoid using the field altogether.
+ * 3. page->_refcount must either not be used, or must be used in such a
+ *    way that other CPUs temporarily incrementing and then decrementing the
+ *    refcount does not cause problems. On receiving the page from
+ *    alloc_pages(), the refcount will be positive.
+ * 4. Either preserve page->_mapcount or restore it to -1 before freeing it.
+ *
+ * If you allocate pages of order > 0, you can use the fields in the struct
+ * page associated with each page, but bear in mind that the pages may have
+ * been inserted individually into the page cache, so you must use the above
+ * four fields in a compatible way for each struct page.
  *
  * SLUB uses cmpxchg_double() to atomically update its freelist and
  * counters. That requires that freelist & counters be adjacent and
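The four rules in the new comment can be sketched in userspace C. This is an illustrative mock, not kernel code: the `struct page` below is a simplified stand-in with only the four fields the comment names (the real layout in `mm_types.h` is a union of many overlays), and `my_private`, `stash_private()`, and `fetch_private()` are hypothetical names for a caller stashing a pointer in `compound_head` while keeping bit 0 clear (rule 2) and restoring fields before free (rule 4).

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's struct page, holding only the
 * four fields the documentation comment says all users must respect. */
struct page {
	unsigned long flags;          /* rule 1: first word is the page flags */
	unsigned long compound_head;  /* rule 2: bit 0 doubles as PageTail */
	int _refcount;                /* rule 3: others may bump it briefly */
	int _mapcount;                /* rule 4: must read -1 when freed */
};

/* Hypothetical per-page private data a driver might want to attach. */
struct my_private {
	uint32_t cookie;
};

/* Store a pointer in compound_head. The pointee is at least 4-byte
 * aligned, so bit 0 stays clear and the page cache can never mistake
 * this page for a tail page (rule 2). */
static void stash_private(struct page *page, struct my_private *priv)
{
	assert(((unsigned long)priv & 1UL) == 0);
	page->compound_head = (unsigned long)priv;
}

static struct my_private *fetch_private(struct page *page)
{
	return (struct my_private *)page->compound_head;
}

/* Undo our use of the page before handing it back to the allocator:
 * clear compound_head and restore _mapcount to -1 (rules 2 and 4). */
static void release_private(struct page *page)
{
	page->compound_head = 0;
	page->_mapcount = -1;
}
```

A caller would initialize the mock page the way `alloc_pages()` hands it over (refcount positive, `_mapcount` of -1), stash and fetch its pointer, then call `release_private()` before "freeing" the page. Rule 3 is represented only by the comment: the sketch never assumes `_refcount` stays at its initial value.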
