1 ============================
2 Transparent Hugepage Support
3 ============================
8 Performance critical computing applications dealing with large memory
9 working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent HugePage Support (THP) is an alternative means
of using huge pages for the backing of virtual memory that supports
the automatic promotion and demotion of page sizes, without the
shortcomings of hugetlbfs.
15 Currently THP only works for anonymous memory mappings and tmpfs/shmem.
16 But in the future it can expand to other filesystems.

.. note::
   In the examples below we presume that the basic page size is 4K and
   the huge page size is 2M, although the actual numbers may vary
   depending on the CPU architecture.

The reason applications run faster is because of two factors. The
first factor is almost completely irrelevant and it's not of
significant interest because it also has the downside of requiring
larger clear-page and copy-page operations in page faults, which is a
potentially negative effect. The first factor consists of taking a
single page fault for each 2M virtual region touched by userland (thus
reducing the enter/exit kernel frequency by a factor of 512). This
only matters the first time the memory is accessed for the lifetime of
a memory mapping. The second, long lasting and much more important,
factor will affect all subsequent accesses to the memory for the whole
runtime of the application. The second factor consists of two
components:

1) the TLB miss will run faster (especially with virtualization using
   nested pagetables but almost always also on bare metal without
   virtualization)

2) a single TLB entry will be mapping a much larger amount of virtual
   memory in turn reducing the number of TLB misses. With
   virtualization and nested pagetables the TLB entries can cover a
   larger size only if both KVM and the Linux guest are using
   hugepages, but a significant speedup already happens if only one of
   the two is using hugepages, just because of the fact the TLB miss
   is going to run faster.

48 Modern kernels support "multi-size THP" (mTHP), which introduces the
49 ability to allocate memory in blocks that are bigger than a base page
50 but smaller than traditional PMD-size (as described above), in
51 increments of a power-of-2 number of pages. mTHP can back anonymous
52 memory (for example 16K, 32K, 64K, etc). These THPs continue to be
53 PTE-mapped, but in many cases can still provide similar benefits to
54 those outlined above: Page faults are significantly reduced (by a
55 factor of e.g. 4, 8, 16, etc), but latency spikes are much less
56 prominent because the size of each page isn't as huge as the PMD-sized
57 variant and there is less memory to clear in each page fault. Some
58 architectures also employ TLB compression mechanisms to squeeze more
59 entries in when a set of PTEs are virtually and physically contiguous
and appropriately aligned. In this case, TLB misses will occur less
often.

THP can be enabled system wide or restricted to certain tasks or even
memory ranges inside a task's address space. Unless THP is completely
disabled, there is the ``khugepaged`` daemon that scans memory and
collapses sequences of basic pages into PMD-sized huge pages.

The THP behaviour is controlled via the :ref:`sysfs <thp_sysfs>`
interface and using the madvise(2) and prctl(2) system calls.
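
For example, a process can request hugepages for a specific mapping, or
opt out of THP for itself entirely. A minimal C sketch (the mapping
length is an arbitrary illustration and error handling is abbreviated)::

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <sys/prctl.h>

    int main(void)
    {
        size_t len = 16UL << 20; /* 16M, arbitrary example size */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;

        /* Hint that this range should be backed by huge pages. */
        madvise(buf, len, MADV_HUGEPAGE);

        /* Alternatively, opt this task out of THP entirely. */
        prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0);

        return 0;
    }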

Transparent Hugepage Support maximizes the usefulness of free memory
if compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable from userland. It allows
paging and all other advanced VM features to be available on the
hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by no means mandatory and khugepaged can already take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases when hugepages are enabled system wide, applications
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it, in which case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
MADV_HUGEPAGE madvise regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting any precious byte of memory and to
only run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.

.. _thp_sysfs:

sysfs
=====

Global THP controls
-------------------

Transparent Hugepage Support for anonymous memory can be entirely disabled
(mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE
regions (to avoid the risk of consuming more memory resources) or enabled
system wide. This can be achieved per-supported-THP-size with one of::

115 echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
116 echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
117 echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled

where <size> is the hugepage size being addressed, the available sizes
for which vary by system.

For example::

    echo always >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
126 Alternatively it is possible to specify that a given hugepage size
127 will inherit the top-level "enabled" value::
129 echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled

For example::

    echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
135 The top-level setting (for use with "inherit") can be set by issuing
136 one of the following commands::
138 echo always >/sys/kernel/mm/transparent_hugepage/enabled
139 echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
140 echo never >/sys/kernel/mm/transparent_hugepage/enabled

By default, PMD-sized hugepages have enabled="inherit" and all other
hugepage sizes have enabled="never". If enabling multiple hugepage
sizes, the kernel will select the most appropriate enabled size for a
given allocation.

It's also possible to limit defrag efforts in the VM to generate
anonymous hugepages in case they're not immediately free to madvise
regions, or to never try to defrag memory and simply fall back to regular
pages unless hugepages are immediately available. Clearly if we spend CPU
time to defrag memory, we would expect to gain even more by the fact we
use hugepages later instead of regular pages. This isn't always
guaranteed, but it may be more likely in case the allocation is for a
MADV_HUGEPAGE region.

::

158 echo always >/sys/kernel/mm/transparent_hugepage/defrag
159 echo defer >/sys/kernel/mm/transparent_hugepage/defrag
160 echo defer+madvise >/sys/kernel/mm/transparent_hugepage/defrag
161 echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
162 echo never >/sys/kernel/mm/transparent_hugepage/defrag

always
    means that an application requesting THP will stall on
    allocation failure and directly reclaim pages and compact
    memory in an effort to allocate a THP immediately. This may be
    desirable for virtual machines that benefit heavily from THP
    use and are willing to delay the VM start to utilise them.

defer
    means that an application will wake kswapd in the background
    to reclaim pages and wake kcompactd to compact memory so that
    THP is available in the near future. It's the responsibility
    of khugepaged to then install the THP pages later.

defer+madvise
    will enter direct reclaim and compaction like ``always``, but
    only for regions that have used madvise(MADV_HUGEPAGE); all
    other regions will wake kswapd in the background to reclaim
    pages and wake kcompactd to compact memory so that THP is
    available in the near future.

madvise
    will enter direct reclaim like ``always`` but only for regions
    that have used madvise(MADV_HUGEPAGE). This is the default
    behaviour.

never
    should be self-explanatory.

By default the kernel tries to use a huge, PMD-mappable zero page on
read page faults to anonymous mappings. It's possible to disable the
huge zero page by writing 0 or enable it back by writing 1::

196 echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
197 echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page
199 Some userspace (such as a test program, or an optimized memory
200 allocation library) may want to know the size (in bytes) of a
201 PMD-mappable transparent hugepage::
203 cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
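
For example, an allocation library might read this value once at startup
to size and align its pools. A minimal C sketch (error handling is
abbreviated)::

    #include <stdio.h>

    /* Returns the PMD-mappable THP size in bytes, or -1 on error. */
    static long hpage_pmd_size(void)
    {
        FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", "r");
        long size = -1;

        if (f) {
            if (fscanf(f, "%ld", &size) != 1)
                size = -1;
            fclose(f);
        }
        return size; /* e.g. 2097152 with 2M PMD-sized THP */
    }

    int main(void)
    {
        printf("%ld\n", hpage_pmd_size());
        return 0;
    }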

All THPs at fault and collapse time will be added to _deferred_list,
and will therefore be split under memory pressure if they are considered
"underused". A THP is underused if the number of zero-filled pages in
the THP is above max_ptes_none (see below). It is possible to disable
this behaviour by writing 0 to shrink_underused, and enable it by writing
1 to it::

212 echo 0 > /sys/kernel/mm/transparent_hugepage/shrink_underused
213 echo 1 > /sys/kernel/mm/transparent_hugepage/shrink_underused

khugepaged will be automatically started when PMD-sized THP is enabled
(either the per-size anon control or the top-level control is set
to "always" or "madvise"), and it'll be automatically shut down when
PMD-sized THP is disabled (when both the per-size anon control and the
top-level control are "never").

Khugepaged controls
-------------------

khugepaged currently only searches for opportunities to collapse to
PMD-sized THP and no attempt is made to collapse to other THP
sizes.

khugepaged usually runs at low frequency so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0 or enable
defrag in khugepaged by writing 1::

235 echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
236 echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass::

241 /sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
243 and how many milliseconds to wait in khugepaged between each pass (you
244 can set this to 0 to run khugepaged at 100% utilization of one core)::
246 /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure to throttle the next allocation attempt::

251 /sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs
253 The khugepaged progress can be seen in the number of pages collapsed (note
254 that this counter may not be an exact count of the number of pages
255 collapsed, since "collapsed" could mean multiple things: (1) A PTE mapping
256 being replaced by a PMD mapping, or (2) All 4K physical pages replaced by
257 one 2M hugepage. Each may happen independently, or together, depending on
258 the type of memory and the failures that occur. As such, this value should
259 be interpreted roughly as a sign of progress, and counters in /proc/vmstat
260 consulted for more accurate accounting)::
262 /sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

for each pass::

    /sys/kernel/mm/transparent_hugepage/khugepaged/full_scans

268 ``max_ptes_none`` specifies how many extra small pages (that are
269 not already mapped) can be allocated when collapsing a group
270 of small pages into one large page::
272 /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none

A higher value leads to programs using additional memory. A lower
value leads to less THP performance gain. The value of max_ptes_none
has very little effect on CPU time, so it can usually be ignored.

279 ``max_ptes_swap`` specifies how many pages can be brought in from
280 swap when collapsing a group of pages into a transparent huge page::
282 /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap

A higher value can cause excessive swap IO and waste
memory. A lower value can prevent THPs from being
collapsed, resulting in fewer pages being collapsed into
THPs, and lower memory access performance.

``max_ptes_shared`` specifies how many pages can be shared across multiple
processes. khugepaged might treat pages of THPs as shared if any page of
that THP is shared. Exceeding this number blocks the collapse::

293 /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_shared
295 A higher value may increase memory footprint for some workloads.

Boot parameters
===============

You can change the sysfs boot time default for the top-level "enabled"
control by passing the parameter ``transparent_hugepage=always`` or
``transparent_hugepage=madvise`` or ``transparent_hugepage=never`` to the
kernel command line.

305 Alternatively, each supported anonymous THP size can be controlled by
306 passing ``thp_anon=<size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>``,
307 where ``<size>`` is the THP size (must be a power of 2 of PAGE_SIZE and
308 supported anonymous THP) and ``<state>`` is one of ``always``, ``madvise``,
309 ``never`` or ``inherit``.

For example, the following will set 16K, 32K, 64K THP to ``always``,
set 128K, 512K to ``inherit``, set 256K to ``madvise`` and 1M, 2M
to ``never``::

315 thp_anon=16K-64K:always;128K,512K:inherit;256K:madvise;1M-2M:never

``thp_anon=`` may be specified multiple times to configure all THP sizes as
required. If ``thp_anon=`` is specified at least once, any anon THP sizes
not explicitly configured on the command line are implicitly set to
``never``.

The ``transparent_hugepage`` setting only affects the global toggle. If
``thp_anon`` is not specified, PMD_ORDER THP will default to ``inherit``.
However, if a valid ``thp_anon`` setting is provided by the user, the
PMD_ORDER THP policy will be overridden. If the policy for PMD_ORDER
is not defined within a valid ``thp_anon``, its policy will default to
``never``.

329 Similarly to ``transparent_hugepage``, you can control the hugepage
330 allocation policy for the internal shmem mount by using the kernel parameter
``transparent_hugepage_shmem=<policy>``, where ``<policy>`` is one of the
six valid policies for shmem (``always``, ``within_size``, ``advise``,
``never``, ``deny``, and ``force``).

In the same manner as ``thp_anon`` controls each supported anonymous THP
size, ``thp_shmem`` controls each supported shmem THP size. ``thp_shmem``
has the same format as ``thp_anon``, but also supports the policy
``within_size``.

``thp_shmem=`` may be specified multiple times to configure all THP sizes
as required. If ``thp_shmem=`` is specified at least once, any shmem THP
sizes not explicitly configured on the command line are implicitly set to
``never``.

The ``transparent_hugepage_shmem`` setting only affects the global toggle. If
346 ``thp_shmem`` is not specified, PMD_ORDER hugepage will default to
347 ``inherit``. However, if a valid ``thp_shmem`` setting is provided by the
348 user, the PMD_ORDER hugepage policy will be overridden. If the policy for
349 PMD_ORDER is not defined within a valid ``thp_shmem``, its policy will
350 default to ``never``.
352 Hugepages in tmpfs/shmem
353 ========================
355 You can control hugepage allocation policy in tmpfs with mount option
``huge=``. It can have the following values:

always
    Attempt to allocate huge pages every time we need a new page;

never
    Do not allocate huge pages;

within_size
    Only allocate huge pages if it will be fully within i_size.
    Also respect fadvise()/madvise() hints;

advise
    Only allocate huge pages if requested with fadvise()/madvise();

371 The default policy is ``never``.
373 ``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
374 ``huge=never`` will not attempt to break up huge pages at all, just stop more
375 from being allocated.
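
For example, mounting a fresh tmpfs instance with the ``within_size``
policy could look like this from C; a minimal sketch, assuming the
``/mnt/thp`` mount point already exists (equivalent to
``mount -t tmpfs -o huge=within_size tmpfs /mnt/thp``)::

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Mount a tmpfs instance with the within_size huge page policy. */
        if (mount("tmpfs", "/mnt/thp", "tmpfs", 0, "huge=within_size")) {
            perror("mount");
            return 1;
        }
        return 0;
    }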

There's also a sysfs knob to control hugepage allocation policy for the
internal shmem mount: /sys/kernel/mm/transparent_hugepage/shmem_enabled.
The mount is used for SysV SHM, memfds, shared anonymous mmaps (of
/dev/zero or MAP_ANONYMOUS), GPU drivers' DRM objects, Ashmem.

In addition to policies listed above, shmem_enabled allows two further
values:

deny
    For use in emergencies, to force the huge option off from
    all mounts;

force
    Force the huge option on for all - very useful for testing;

Shmem can also use "multi-size THP" (mTHP), controlled per size via the
sysfs knob
'/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled'.
Its value for each mTHP size is essentially consistent with the global
setting. An 'inherit' option is added to ensure compatibility with the
global settings, while the options 'force' and 'deny' are dropped, as
they are rather testing artifacts:

always
    Attempt to allocate <size> huge pages every time we need a new page;

inherit
    Inherit the top-level "shmem_enabled" value. By default, PMD-sized hugepages
    have enabled="inherit" and all other hugepage sizes have enabled="never";

never
    Do not allocate <size> huge pages;

within_size
    Only allocate <size> huge pages if it will be fully within i_size.
    Also respect fadvise()/madvise() hints;

advise
    Only allocate <size> huge pages if requested with fadvise()/madvise();

416 Need of application restart
417 ===========================
419 The transparent_hugepage/enabled and
420 transparent_hugepage/hugepages-<size>kB/enabled values and tmpfs mount
421 option only affect future behavior. So to make them effective you need
422 to restart any application that could have been using hugepages. This
423 also applies to the regions registered in khugepaged.

Monitoring usage
================

The number of PMD-sized anonymous transparent huge pages currently used by the
system is available by reading the AnonHugePages field in ``/proc/meminfo``.
To identify what applications are using PMD-sized anonymous transparent huge
pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
fields for each mapping. (Note that AnonHugePages only applies to traditional
PMD-sized THP for historical reasons and should have been called
AnonHugePmdMapped).

436 The number of file transparent huge pages mapped to userspace is available
437 by reading ShmemPmdMapped and ShmemHugePages fields in ``/proc/meminfo``.
438 To identify what applications are mapping file transparent huge pages, it
is necessary to read ``/proc/PID/smaps`` and count the FileHugeMapped fields
for each mapping.

442 Note that reading the smaps file is expensive and reading it
443 frequently will incur overhead.
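
For example, a process's total PMD-mapped anonymous THP usage can be
summed with a small helper. A minimal C sketch (the PID is a placeholder,
and polling this frequently incurs exactly the overhead noted above)::

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/1234/smaps", "r");
        char line[256];
        long kb, total = 0;

        if (!f)
            return 1;
        /* Sum the AnonHugePages field (in kB) over all mappings. */
        while (fgets(line, sizeof(line), f))
            if (sscanf(line, "AnonHugePages: %ld kB", &kb) == 1)
                total += kb;
        fclose(f);
        printf("%ld kB\n", total);
        return 0;
    }
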
445 There are a number of counters in ``/proc/vmstat`` that may be used to
446 monitor how successfully the system is providing huge pages for use.

thp_fault_alloc
    is incremented every time a huge page is successfully
    allocated and charged to handle a page fault.

thp_collapse_alloc
    is incremented by khugepaged when it has found
    a range of pages to collapse into one huge page and has
    successfully allocated a new huge page to store the data.

thp_fault_fallback
    is incremented if a page fault fails to allocate or charge
    a huge page and instead falls back to using small pages.

461 thp_fault_fallback_charge
462 is incremented if a page fault fails to charge a huge page and
463 instead falls back to using small pages even though the
464 allocation was successful.
466 thp_collapse_alloc_failed
    is incremented if khugepaged found a range
    of pages that should be collapsed into one huge page but failed
    the allocation.

thp_file_alloc
    is incremented every time a shmem huge page is successfully
    allocated (Note that despite being named after "file", the counter
    measures only shmem).

thp_file_fallback
    is incremented if a shmem huge page is attempted to be allocated
    but fails and instead falls back to using small pages. (Note that
    despite being named after "file", the counter measures only shmem).

481 thp_file_fallback_charge
482 is incremented if a shmem huge page cannot be charged and instead
483 falls back to using small pages even though the allocation was
484 successful. (Note that despite being named after "file", the
485 counter measures only shmem).

thp_file_mapped
    is incremented every time a file or shmem huge page is mapped into
    user address space.

thp_split_page
    is incremented every time a huge page is split into base
    pages. This can happen for a variety of reasons but a common
    reason is that a huge page is old and is being reclaimed.
    This action implies splitting any PMD the page was mapped with.

497 thp_split_page_failed
    is incremented if the kernel fails to split a huge
    page. This can happen if the page was pinned by somebody.

501 thp_deferred_split_page
502 is incremented when a huge page is put onto split
503 queue. This happens when a huge page is partially unmapped and
504 splitting it would free up some memory. Pages on split queue are
505 going to be split under memory pressure.
507 thp_underused_split_page
508 is incremented when a huge page on the split queue was split
509 because it was underused. A THP is underused if the number of
510 zero pages in the THP is above a certain threshold
511 (/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none).

thp_split_pmd
    is incremented every time a PMD is split into a table of PTEs.
    This can happen, for instance, when an application calls mprotect() or
    munmap() on part of a huge page. It doesn't split the huge page, only
    the page table entry.

thp_zero_page_alloc
    is incremented every time a huge zero page used for thp is
    successfully allocated. Note, it doesn't count every map of
    the huge zero page, only its allocation.

524 thp_zero_page_alloc_failed
    is incremented if the kernel fails to allocate
    the huge zero page and falls back to using small pages.

thp_swpout
    is incremented every time a huge page is swapped out in one
    piece without splitting.

thp_swpout_fallback
    is incremented if a huge page has to be split before swapout,
    usually because the kernel failed to allocate some continuous
    swap space for the huge page.

In /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats, there are
also individual counters for each huge page size, which can be utilized to
monitor the system's effectiveness in providing huge pages for usage. Each
counter has its own corresponding file.

anon_fault_alloc
    is incremented every time a huge page is successfully
    allocated and charged to handle a page fault.

anon_fault_fallback
    is incremented if a page fault fails to allocate or charge
    a huge page and instead falls back to using huge pages with
    lower orders or small pages.

551 anon_fault_fallback_charge
552 is incremented if a page fault fails to charge a huge page and
553 instead falls back to using huge pages with lower orders or
554 small pages even though the allocation was successful.

zswpout
    is incremented every time a huge page is swapped out to zswap in one
    piece without splitting.

swpin
    is incremented every time a huge page is swapped in from a non-zswap
    swap device in one piece.

swpout
    is incremented every time a huge page is swapped out to a non-zswap
    swap device in one piece without splitting.

swpout_fallback
    is incremented if a huge page has to be split before swapout,
    usually because the kernel failed to allocate some continuous
    swap space for the huge page.

shmem_alloc
    is incremented every time a shmem huge page is successfully
    allocated.

shmem_fallback
    is incremented if a shmem huge page is attempted to be allocated
    but fails and instead falls back to using small pages.

581 shmem_fallback_charge
    is incremented if a shmem huge page cannot be charged and instead
    falls back to using small pages even though the allocation was
    successful.

split
    is incremented every time a huge page is successfully split into
    smaller orders. This can happen for a variety of reasons but a
    common reason is that a huge page is old and is being reclaimed.

split_failed
    is incremented if the kernel fails to split a huge
    page. This can happen if the page was pinned by somebody.

split_deferred
    is incremented when a huge page is put onto the split queue.
    This happens when a huge page is partially unmapped and splitting
    it would free up some memory. Pages on the split queue are going to
    be split under memory pressure, if splitting is possible.

nr_anon
    the number of anonymous THP we have in the whole system. These THPs
    might be currently entirely mapped or have partially unmapped/unused
    subpages.

606 nr_anon_partially_mapped
    the number of anonymous THP which are likely partially mapped, possibly
    wasting memory, and have been queued for deferred memory reclamation.
    Note that in some corner cases (e.g., failed migration), we might detect
    an anonymous THP as "partially mapped" and count it here, even though it
    is not actually partially mapped anymore.

613 As the system ages, allocating huge pages may be expensive as the
614 system uses memory compaction to copy data around memory to free a
615 huge page for use. There are some counters in ``/proc/vmstat`` to help
616 monitor this overhead.

compact_stall
    is incremented every time a process stalls to run
    memory compaction so that a huge page is free for use.

compact_success
    is incremented if the system compacted memory and
    freed a huge page for use.

compact_fail
    is incremented if the system tries to compact memory
    but failed.

It is possible to establish how long the stalls were using the function
tracer to record how long was spent in __alloc_pages() and
using the mm_page_alloc tracepoint to identify which allocations were
doing the stalling.

635 Optimizing the applications
636 ===========================
638 To be guaranteed that the kernel will map a THP immediately in any
639 memory region, the mmap region has to be hugepage naturally
640 aligned. posix_memalign() can provide that guarantee.
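
A minimal C sketch of that pattern, combining posix_memalign() with
madvise(MADV_HUGEPAGE); the 2M alignment here is an assumption for the
common case, a portable program would read ``hpage_pmd_size`` instead::

    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t hpage = 2UL << 20; /* assumed PMD THP size, see hpage_pmd_size */
        size_t len = 64 * hpage;  /* arbitrary example: a 128M arena */
        void *buf;

        /* Hugepage-aligned, so the kernel can map THPs immediately. */
        if (posix_memalign(&buf, hpage, len))
            return 1;

        /* Ask for hugepages on this critical region. */
        madvise(buf, len, MADV_HUGEPAGE);

        free(buf);
        return 0;
    }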

Hugetlbfs
=========

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.