============================
Transparent Hugepage Support
============================

This document describes design principles for Transparent Hugepage (THP)
support and its interaction with other parts of the memory management
system.

Design principles
=================

12 - "graceful fallback": mm components which don't have transparent hugepage
13 knowledge fall back to breaking huge pmd mapping into table of ptes and,
14 if necessary, split a transparent hugepage. Therefore these components
15 can continue working on the regular pages or regular pte mappings.
- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated to hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to prevent unmovable pages from fragmenting all the memory, but such
  a tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

get_user_pages and pin_user_pages
=================================

get_user_pages and pin_user_pages, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most GUP users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice the fact the page is huge. But
if any driver is going to inspect the page structure of a tail page
(like for checking page->mapping or other bits that are relevant for
the head page and not the tail page), it should be updated to check
the head page instead. Taking a reference on any head/tail page would
prevent the page from being split by anyone.
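
For instance, such a driver can resolve a possible tail page to its head
page with compound_head() before looking at ->mapping. A minimal sketch
(my_drv_page_is_file_backed() is a hypothetical helper, not an existing
kernel function)::

    #include <linux/mm.h>

    /*
     * ->mapping is only meaningful on the head page, so resolve a
     * possible tail page before inspecting it.
     */
    static bool my_drv_page_is_file_backed(struct page *page)
    {
            struct page *head = compound_head(page);

            return head->mapping && !PageAnon(head);
    }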

Note that these aren't new constraints to the GUP API; they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.

Graceful fallback
=================

Code walking pagetables but unaware of huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one-liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does, for
example, before it tries to swap out a hugepage. split_huge_page() can
fail if the page is pinned, and you must handle this correctly.
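
A sketch of that pattern (my_process_page() is a hypothetical caller;
split_huge_page() must be called on a locked page and returns non-zero
on failure)::

    #include <linux/mm.h>
    #include <linux/huge_mm.h>
    #include <linux/pagemap.h>

    /* Hypothetical caller that cannot handle huge pages natively. */
    static int my_process_page(struct page *page)
    {
            if (PageTransHuge(page)) {
                    lock_page(page);        /* split requires the page lock */
                    if (split_huge_page(page)) {
                            /* Extra pins (e.g. GUP) make the split fail. */
                            unlock_page(page);
                            return -EBUSY;
                    }
                    unlock_page(page);
            }
            /* ... continue on what is now a regular page ... */
            return 0;
    }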

Example to make mremap.c transparent hugepage aware with a one-liner
change::

    diff --git a/mm/mremap.c b/mm/mremap.c
    --- a/mm/mremap.c
    +++ b/mm/mremap.c
    @@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
    		return NULL;

    	pmd = pmd_offset(pud, addr);
    +	split_huge_pmd(vma, pmd, addr);
    	if (pmd_none_or_clear_bad(pmd))
    		return NULL;

Locking in hugepage aware code
==============================

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_lock in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_lock in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
page table lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_pmd can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page table lock and fall back to the old code as
before. Otherwise, you can proceed to process the huge pmd and the
hugepage natively. Once finished, you can drop the page table lock.
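
Put together, such a walk takes the following shape (a sketch only:
process_huge_pmd() and process_pte_range() are hypothetical helpers,
and the mmap_lock is assumed to be held at least in read mode)::

    #include <linux/mm.h>
    #include <linux/huge_mm.h>

    static void walk_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
                             unsigned long addr)
    {
            if (pmd_trans_huge(*pmd)) {
                    spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);

                    /*
                     * Re-check under the page table lock:
                     * split_huge_pmd() may have run in the meantime.
                     */
                    if (pmd_trans_huge(*pmd)) {
                            process_huge_pmd(vma, pmd, addr);
                            spin_unlock(ptl);
                            return;
                    }
                    spin_unlock(ptl);
            }
            /* Regular (or just split) pmd: take the pte-level path. */
            process_pte_range(vma, pmd, addr);
    }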

Refcounts and transparent huge pages
====================================

Refcounting on THP is mostly consistent with refcounting on other
compound pages:

- get_page()/put_page() and GUP operate on the folio->_refcount.

- ->_refcount in tail pages is always zero: get_page_unless_zero() never
  succeeds on tail pages.

- map/unmap of a PMD entry for the whole THP increment/decrement
  folio->_entire_mapcount, increment/decrement folio->_large_mapcount
  and also increment/decrement folio->_nr_pages_mapped by ENTIRELY_MAPPED
  when _entire_mapcount goes from -1 to 0 or 0 to -1.

- map/unmap of individual pages with PTE entry increment/decrement
  page->_mapcount, increment/decrement folio->_large_mapcount and also
  increment/decrement folio->_nr_pages_mapped when page->_mapcount goes
  from -1 to 0 or 0 to -1 as this counts the number of pages mapped by PTE.

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries, but we don't have enough information on how to distribute any
additional pins (e.g. from get_user_pages). split_huge_page() fails any
request to split a pinned huge page: it expects the page count to be
equal to the sum of the mapcounts of all sub-pages plus one (the
split_huge_page caller must have a reference to the head page).
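
In other words, the check amounts to simple arithmetic. An illustrative
sketch (my_thp_split_allowed() is hypothetical; the kernel's actual
check is more involved)::

    /*
     * Illustrative only: a split is refused when references exist
     * beyond the page table mappings and the caller's own pin.
     */
    static bool my_thp_split_allowed(int refcount, int total_mapcount)
    {
            /* The +1 is the caller's reference on the head page. */
            return refcount == total_mapcount + 1;
    }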

split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages just get unmapped.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().

All tail pages have zero ->_refcount until atomic_add(). This prevents the
scanner from getting a reference to the tail page up to that point. After the
atomic_add() we don't care about the ->_refcount value. We already know how
many references should be uncharged from the head page.

For the head page get_page_unless_zero() will succeed and we don't mind.
It's clear where the references should go after the split: they will stay
on the head page.

Note that split_huge_pmd() doesn't have any limitations on refcounting:
the pmd can be split at any point and the operation never fails.

Partial unmap and deferred_split_folio()
========================================

Unmapping part of a THP (with munmap() or another way) is not going to
free memory immediately. Instead, we detect that a subpage of the THP is
not in use in folio_remove_rmap_*() and queue the THP for splitting if
memory pressure comes. Splitting will free up the unused subpages.

Splitting the page right away is not an option due to locking context in
the place where we can detect partial unmap. It also might be
counterproductive since in many cases partial unmap happens during exit(2) if
a THP crosses a VMA boundary.

The function deferred_split_folio() is used to queue a folio for splitting.
The splitting itself will happen when we get memory pressure via the
shrinker interface.
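
For illustration, a partial unmap from userspace can look like this (a
sketch; whether the region is actually backed by THPs depends on system
configuration and runtime conditions)::

    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 4UL << 20;         /* room for two 2MB THPs */
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED)
                    return 1;
            madvise(p, len, MADV_HUGEPAGE); /* ask for THP backing */
            memset(p, 1, len);              /* fault the memory in */
            /*
             * Unmap only the first 4KB: the containing THP stays
             * mapped, is queued via deferred_split_folio(), and gets
             * split (freeing the unused subpage) under memory pressure.
             */
            munmap(p, 4096);
            return 0;
    }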