x86/mm/ptdump: Optimize check for W+X mappings for CONFIG_KASAN=y
commit 243b72aae28ca1032284028323bb81c9235b15c9
author Andrey Ryabinin <aryabinin@virtuozzo.com>
Tue, 14 Feb 2017 10:08:38 +0000 (13:08 +0300)
committer Thomas Gleixner <tglx@linutronix.de>
Thu, 16 Feb 2017 18:53:25 +0000 (19:53 +0100)
tree 5d35c60ad4068a0f691bd6f9175e008a2ee946f1
parent 5b1ad68f9b1951ef78312d2935906cc8a8bd2e12
x86/mm/ptdump: Optimize check for W+X mappings for CONFIG_KASAN=y

Enabling both the DEBUG_WX=y and KASAN=y options significantly increases
boot time (by dozens of seconds at least).
KASAN fills the kernel page tables with repeated values in order to map
several TBs of virtual memory to the single kasan_zero_page:

    kasan_zero_pud ->
        kasan_zero_pmd->
            kasan_zero_pte->
                kasan_zero_page

So the page table walker used to find W+X mappings checks the same
kasan_zero_p?d page table entries many times over.
With this patch, the pud walker skips a pud entry if it has the same value
as the previous one. The skipping is done only when searching for W+X
mappings, so this optimization does not affect the page table dump via
debugfs.
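
A minimal sketch of the skip logic, assuming the pud walker and the
note_page()/walk_pmd_level() helpers in arch/x86/mm/dump_pagetables.c;
the helper name pud_already_checked() is illustrative and the loop body
is condensed (the real walker also handles large and non-present puds
separately), so treat this as the idea rather than the exact hunk:

    /* Skip only during the W+X scan, and only for a pud that repeats the
     * previous entry, as the KASAN zero page tables do.
     */
    static bool pud_already_checked(pud_t *prev_pud, pud_t *pud, bool checkwx)
    {
            return checkwx && prev_pud && (pud_val(*prev_pud) == pud_val(*pud));
    }

    /* In walk_pud_level(): remember the previous entry, skip duplicates. */
    pud_t *prev_pud = NULL;

    for (i = 0; i < PTRS_PER_PUD; i++, start++) {
            if (!pud_none(*start) &&
                !pud_already_checked(prev_pud, start, st->check_wx))
                    walk_pmd_level(m, st, *start, P + i * PUD_LEVEL_MULT);
            else
                    note_page(m, st, __pgprot(0), 2);

            prev_pud = start;
    }

Since the debugfs dump path runs with check_wx == false, it still descends
into every pud and its output is unchanged.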

This drops the time spent in the W+X check from ~30 sec to a reasonable ~0.1 sec:

Before:
[    4.579991] Freeing unused kernel memory: 1000K
[   35.257523] x86/mm: Checked W+X mappings: passed, no W+X pages found.

After:
[    5.138756] Freeing unused kernel memory: 1000K
[    5.266496] x86/mm: Checked W+X mappings: passed, no W+X pages found.

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: kasan-dev@googlegroups.com
Cc: Tobias Regnery <tobias.regnery@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Link: http://lkml.kernel.org/r/20170214100839.17186-1-aryabinin@virtuozzo.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
arch/x86/mm/dump_pagetables.c