.. SPDX-License-Identifier: GPL-2.0
The acquisition orders for mutexes are as follows:

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.

Everything else is a leaf: no other lock is taken inside the critical
sections.
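For example, a path that needs both the VM-wide lock and a per-vCPU lock
must nest them per the first rule above. A minimal sketch (the surrounding
context is illustrative, not taken from the KVM source)::

    /* Correct nesting: kvm->lock is always taken before vcpu->mutex. */
    mutex_lock(&kvm->lock);
    mutex_lock(&vcpu->mutex);

    /* ... operate on VM-wide and per-vCPU state ... */

    mutex_unlock(&vcpu->mutex);
    mutex_unlock(&kvm->lock);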
Fast page fault is the fast path which fixes the guest page fault out of
the mmu-lock on x86. Currently, the page fault can be fast in one of the
following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
   tracking, i.e. the SPTE_SPECIAL_MASK is set. That means we need to
   restore the saved R/X bits. This is described in more detail later below.

2. Write-Protection: The SPTE is present and the fault is
   caused by write-protect. That means we just need to change the W bit of
   the spte.
What we use to avoid all the races is the SPTE_HOST_WRITEABLE bit and the
SPTE_MMU_WRITEABLE bit on the spte:

- SPTE_HOST_WRITEABLE means the gfn is writable on host.
- SPTE_MMU_WRITEABLE means the gfn is writable on mmu. The bit is set when
  the gfn is writable on guest mmu and it is not write-protected by shadow
  page write-protection.
On fast page fault path, we will use cmpxchg to atomically set the spte W
bit if spte.SPTE_HOST_WRITEABLE = 1 and spte.SPTE_MMU_WRITEABLE = 1, or
restore the saved R/X bits if the spte is marked for access tracking, or
both. This is safe because any concurrent change to these bits will be
detected by the cmpxchg.
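In outline, the lockless update looks like the sketch below. This is a
condensed illustration of the logic in fast_page_fault() rather than the
exact code; spte_can_locklessly_be_made_writable() is the helper in
arch/x86/kvm/mmu.c that tests the two bits described above::

    u64 old_spte = READ_ONCE(*sptep);   /* read once, outside mmu-lock */
    u64 new_spte = old_spte;

    /* Both SPTE_HOST_WRITEABLE and SPTE_MMU_WRITEABLE must be set. */
    if (spte_can_locklessly_be_made_writable(old_spte))
            new_spte |= PT_WRITABLE_MASK;        /* set the W bit */

    /*
     * Publish new_spte only if nothing changed under us; any concurrent
     * modification makes the cmpxchg fail, and we retry or fall back to
     * the slow path under mmu-lock.
     */
    if (cmpxchg64(sptep, old_spte, new_spte) != old_spte)
            /* lost the race: retry or take the slow path */;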
But we need to carefully check these cases:

1) The mapping from gfn to pfn

The mapping from gfn to pfn may change, since we can only ensure that the
pfn is not changed during the cmpxchg. This is an ABA problem; for example,
the case below will happen:
+------------------------------------------------------------------------+
| At the beginning::                                                      |
|                                                                         |
|      gpte = gfn1                                                        |
|      gfn1 is mapped to pfn1 on host                                     |
|      spte is the shadow page table entry corresponding with gpte and    |
|      spte = pfn1                                                        |
+------------------------------------------------------------------------+
| On fast page fault path:                                                |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   old_spte = *spte;                |                                   |
+------------------------------------+-----------------------------------+
|                                    | pfn1 is swapped out::             |
|                                    |                                   |
|                                    |      spte = 0;                    |
|                                    |                                   |
|                                    | pfn1 is re-alloced for gfn2.      |
|                                    |                                   |
|                                    | gpte is changed to point to       |
|                                    | gfn2 by the guest::               |
|                                    |                                   |
|                                    |      spte = pfn1;                 |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   if (cmpxchg(spte, old_spte, old_spte+W)                               |
|      mark_page_dirty(vcpu->kvm, gfn1)                                   |
|           OOPS!!!                                                       |
+------------------------------------------------------------------------+
We dirty-log for gfn1; that means gfn2 is lost in the dirty bitmap.

For direct sp, we can easily avoid it since the spte of direct sp is fixed
to gfn. For indirect sp, we disabled fast page fault for simplicity.
A solution for indirect sp could be to pin the gfn, for example via
kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning:

- We have held the refcount of pfn; that means the pfn can not be freed and
  reused for another gfn.
- The pfn is writable and therefore it can not be shared between different
  gfns by KSM.

Then, we can ensure the dirty bitmap is correctly set for a gfn.
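A condensed sketch of that pinning scheme, assuming the caller has already
read old_spte and computed gfn (sptep and the control flow are illustrative
only, since this solution is not implemented)::

    kvm_pfn_t pfn;

    /* Pin the page: takes a reference, and is safe in atomic context. */
    pfn = kvm_vcpu_gfn_to_pfn_atomic(vcpu, gfn);
    if (is_error_pfn(pfn))
            return false;               /* fall back to the slow path */

    /*
     * The reference guarantees that pfn cannot be freed and re-allocated
     * for another gfn before the cmpxchg completes, closing the ABA
     * window shown in the table above.
     */
    if (cmpxchg64(sptep, old_spte, old_spte | PT_WRITABLE_MASK) == old_spte)
            mark_page_dirty(vcpu->kvm, gfn);

    kvm_release_pfn_clean(pfn);         /* drop the pin */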
2) Dirty bit tracking
In the original code, the spte could be updated fast (non-atomically) if the
spte is read-only and the Accessed bit has already been set, since then the
Accessed bit and Dirty bit can not be lost.

But that is no longer true after fast page fault, since the spte can be
marked writable between reading the spte and updating it, as in the case
below:
+------------------------------------------------------------------------+
| At the beginning::                                                      |
|                                                                         |
|      spte.W = 0                                                         |
|      spte.Accessed = 1                                                  |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| In mmu_spte_clear_track_bits()::   |                                   |
|                                    |                                   |
|  old_spte = *spte;                 |                                   |
|                                    |                                   |
|  /* 'if' condition is satisfied. */|                                   |
|  if (old_spte.Accessed == 1 &&     |                                   |
|      old_spte.W == 0)              |                                   |
|      spte = 0ull;                  |                                   |
+------------------------------------+-----------------------------------+
|                                    | on fast page fault path::         |
|                                    |                                   |
|                                    |      spte.W = 1                   |
|                                    |                                   |
|                                    | memory write on the spte::        |
|                                    |                                   |
|                                    |      spte.Dirty = 1               |
+------------------------------------+-----------------------------------+
|  else                              |                                   |
|      old_spte = xchg(spte, 0ull)   |                                   |
|                                    |                                   |
|  if (old_spte.Accessed == 1)       |                                   |
|      kvm_set_pfn_accessed(spte.pfn);                                    |
|  if (old_spte.Dirty == 1)          |                                   |
|      kvm_set_pfn_dirty(spte.pfn);  |                                   |
|           OOPS!!!                  |                                   |
+------------------------------------+-----------------------------------+
The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock (see spte_has_volatile_bits()); that
means the spte is always atomically updated in this case.
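A condensed sketch of the resulting rule, simplified from
mmu_spte_clear_track_bits() in arch/x86/kvm/mmu.c (the two __update_clear_*
helpers are the plain-write and atomic-xchg paths there)::

    u64 old_spte = *sptep;

    if (!spte_has_volatile_bits(old_spte))
            /* Nobody can change the spte under us: plain write is fine. */
            __update_clear_spte_fast(sptep, 0ull);
    else
            /* Atomic xchg: concurrently-set Accessed/Dirty are not lost. */
            old_spte = __update_clear_spte_slow(sptep, 0ull);

    if (old_spte & shadow_accessed_mask)
            kvm_set_pfn_accessed(spte_to_pfn(old_spte));
    if (old_spte & shadow_dirty_mask)
            kvm_set_pfn_dirty(spte_to_pfn(old_spte));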
3) Flushing TLBs due to spte updates

If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might be cached on a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check in
mmu_spte_update() whether TLBs need to be flushed for this reason, since this
is a common function to update the spte (present -> present).

Since the spte is "volatile" if it can be updated out of mmu-lock, we always
update the spte atomically, and the race caused by fast page fault can be
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().
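The check itself is small; roughly the following, simplified from the flush
decision in mmu_spte_update()::

    bool flush = false;

    /*
     * A writable -> read-only transition must flush TLBs: some CPU may
     * still hold the stale writable translation, and rmap_write_protect
     * would otherwise wrongly conclude the gfn is not writable.
     */
    if (is_writable_pte(old_spte) && !is_writable_pte(new_spte))
            flush = true;

    return flush;   /* caller issues kvm_flush_remote_tlbs() if true */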
Lockless Access Tracking:

This is used for Intel CPUs that are using EPT but do not support the EPT A/D
bits. In this case, when the KVM MMU notifier is called to track accesses to a
page (via kvm_mmu_notifier_clear_flush_young), it marks the PTE as not-present
by clearing the RWX bits in the PTE and storing the original R & X bits in
some unused/ignored bits. In addition, the SPTE_SPECIAL_MASK is also set on the
PTE (using the ignored bit 62). When the VM tries to access the page later on,
a fault is generated and the fast page fault mechanism described above is used
to atomically restore the PTE to a Present state. The W bit is not saved when
the PTE is marked for access tracking and, during restoration to the Present
state, the W bit is set depending on whether or not it was a write access. If
it wasn't, then the W bit will remain clear until a write access happens, at
which time it will be set using the Dirty tracking mechanism described above.
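A condensed sketch of the two halves of this scheme, loosely following
mark_spte_for_access_track() and restore_acc_track_spte() in
arch/x86/kvm/mmu.c; SAVED_BITS_MASK and SAVED_BITS_SHIFT are placeholder
names for the shadow_acc_track_saved_bits_* values used there::

    /* Mark: save R/X into ignored high bits, then clear RWX. */
    spte |= (spte & SAVED_BITS_MASK) << SAVED_BITS_SHIFT;
    spte &= ~VMX_EPT_RWX_MASK;                  /* now not-present */
    spte |= SPTE_SPECIAL_MASK;                  /* ignored bit 62 */

    /* Restore (fast page fault): move the saved R/X bits back. */
    spte |= (spte >> SAVED_BITS_SHIFT) & SAVED_BITS_MASK;
    spte &= ~(SAVED_BITS_MASK << SAVED_BITS_SHIFT);  /* drop the copy */
    spte &= ~SPTE_SPECIAL_MASK;
    if (write_fault)
            spte |= PT_WRITABLE_MASK;   /* W set only for a write access */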
:Name: kvm_count_lock
:Type: raw_spinlock_t
:Arch: any
:Protects: - hardware virtualization enable/disable
:Comment: 'raw' because hardware enabling/disabling must be atomic /wrt
          migration.

:Name: kvm_arch::tsc_write_lock
:Type: raw_spinlock
:Arch: x86
:Protects: - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
           - tsc offset in vmcb
:Comment: 'raw' because updating the tsc offsets must not be preempted.

:Name: kvm->mmu_lock
:Type: spinlock_t
:Arch: any
:Protects: - shadow page/shadow tlb entry
:Comment: it is a spinlock since it is used in mmu notifier.

:Name: kvm->srcu
:Type: srcu lock
:Arch: any
:Protects: - kvm->memslots
           - kvm->buses
:Comment: The srcu read lock must be held while accessing memslots (e.g.
          when using gfn_to_* functions) and while accessing in-kernel
          MMIO/PIO address->device structure mapping (kvm->buses).
          The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
          if it is needed by multiple functions.

:Name: blocked_vcpu_on_cpu_lock
:Type: spinlock_t
:Arch: x86
:Protects: blocked_vcpu_on_cpu
:Comment: This is a per-CPU lock and it is used for VT-d posted-interrupts.
          When VT-d posted-interrupts are supported and the VM has assigned
          devices, we put the blocked vCPU on the list blocked_vcpu_on_cpu
          protected by blocked_vcpu_on_cpu_lock. When VT-d hardware issues
          a wakeup notification event (because an external interrupt from an
          assigned device has arrived), we find the vCPU on the list and
          wake it up.