		Cache and TLB Flushing
		     Under Linux

	    David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and what side effects are expected
after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.  The
SMP cases are a simple extension: just extend the definition such
that the side effect for a particular interface occurs on all
processors in the system.  Don't let this scare you into thinking
SMP cache/tlb flushing must be inefficient; this is in fact an
area where many optimizations are possible.  For example, if it
can be proven that a user address space has never executed on a
cpu (see vma->cpu_vm_mask), one need not perform a flush for this
address space on that cpu.

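For example, an SMP port might structure its cross-cpu flush to skip
uninvolved processors.  A minimal sketch follows; the mask field
matches the 2.4-era mm_struct, and send_tlb_flush_ipi() is a
hypothetical helper, not a real kernel interface:

	/* Sketch only: skip the cross-call for cpus this address
	 * space has never run on, since they can hold no stale
	 * translations for it.  send_tlb_flush_ipi() is hypothetical.
	 */
	void smp_flush_tlb_mm(struct mm_struct *mm)
	{
		int cpu;

		for (cpu = 0; cpu < smp_num_cpus; cpu++) {
			if (!(mm->cpu_vm_mask & (1UL << cpu)))
				continue;	/* never ran here, skip */
			send_tlb_flush_ipi(cpu, mm);
		}
	}
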
First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it
is possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) void flush_tlb_all(void)

	The most severe flush of all.  After this interface runs,
	any previous page table modification whatsoever will be
	visible to the cpu.

	This is usually invoked when the kernel page tables are
	changed, since such translations are "global" in nature.

2) void flush_tlb_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the TLB.  After running, this interface must make sure that
	any previous page table modifications for the address space
	'mm' will be visible to the cpu.  That is, after running,
	there will be no entries in the TLB for 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork, and exec.

3) void flush_tlb_range(struct mm_struct *mm,
			unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	address translations from the TLB.  After running, this
	interface must make sure that any previous page table
	modifications for the address space 'mm' in the range 'start'
	to 'end' will be visible to the cpu.  That is, after running,
	there will be no entries in the TLB for 'mm' for virtual
	addresses in the range 'start' to 'end'.

	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized translations from the TLB, instead of having the kernel
	call flush_tlb_page (see below) for each entry which may be
	modified.

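	If a port has no block-invalidate hardware, the fallback is
	exactly that per-page loop.  A sketch, where __flush_tlb_one()
	stands in for a hypothetical single-entry invalidate primitive:

	void flush_tlb_range(struct mm_struct *mm,
			     unsigned long start, unsigned long end)
	{
		unsigned long addr;

		/* One TLB entry per page in the range. */
		for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE)
			__flush_tlb_one(mm, addr);	/* hypothetical */
	}
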
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)

	This time we need to remove the PAGE_SIZE sized translation
	from the TLB.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction TLB' in
	split-tlb type setups).

	After running, this interface must make sure that any previous
	page table modification for address space 'vma->vm_mm' for
	user virtual address 'page' will be visible to the cpu.  That
	is, after running, there will be no entries in the TLB for
	'vma->vm_mm' for virtual address 'page'.

	This is used primarily during fault processing.

5) void flush_tlb_pgtables(struct mm_struct *mm,
			   unsigned long start, unsigned long end)

	The software page tables for address space 'mm' for virtual
	addresses in the range 'start' to 'end' are being torn down.

	Some platforms cache the lowest level of the software page tables
	in a linear virtually mapped array, to make TLB miss processing
	more efficient.  On such platforms, since the TLB is caching the
	software page table structure, it needs to be flushed when parts
	of the software page table tree are unlinked/freed.

	Sparc64 is one example of a platform which does this.

	Usually, when munmap()'ing an area of user virtual address
	space, the kernel leaves the page table parts around and just
	marks the individual pte's as invalid.  However, if very large
	portions of the address space are unmapped, the kernel frees up
	those portions of the software page tables to prevent potential
	excessive kernel memory usage caused by erratic mmap/munmap
	sequences.  It is at these times that flush_tlb_pgtables will
	be invoked.

6) void update_mmu_cache(struct vm_area_struct *vma,
			 unsigned long address, pte_t pte)

	At the end of every page fault, this routine is invoked to
	tell the architecture specific code that a translation
	described by "pte" now exists at virtual address "address"
	for address space "vma->vm_mm", in the software page tables.

	A port may use this information in any way it so chooses.
	For example, it could use this event to pre-load TLB
	translations for software managed TLB configurations.
	The sparc64 port currently does this.

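	As a sketch of that idea, a software-managed-TLB port might
	pre-load the entry for the context that just faulted;
	tlb_preload() is a hypothetical port primitive:

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t pte)
	{
		/* Pre-load only if this is the currently live context. */
		if (vma->vm_mm == current->active_mm)
			tlb_preload(address, pte);	/* hypothetical */
	}
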
Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

1) flush_cache_mm(mm);
   change_all_page_tables_of(mm);
   flush_tlb_mm(mm);

2) flush_cache_range(mm, start, end);
   change_range_of_page_tables(mm, start, end);
   flush_tlb_range(mm, start, end);

3) flush_cache_page(vma, page);
   set_pte(pte_pointer, new_pte_val);
   flush_tlb_page(vma, page);

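As a concrete instance of form 3, a hypothetical helper that rewrites
one user pte would be ordered like so:

	static void change_one_pte(struct vm_area_struct *vma,
				   unsigned long address,
				   pte_t *ptep, pte_t new_pte)
	{
		flush_cache_page(vma, address);	/* old translation still valid */
		set_pte(ptep, new_pte);		/* install new translation */
		flush_tlb_page(vma, address);	/* shoot down stale TLB entry */
	}
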
The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed, physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.

Here are the routines, one by one:

1) void flush_cache_all(void)

	The most severe flush of all.  After this interface runs,
	the entire cpu cache is flushed.

	This is usually invoked when the kernel page tables are
	changed, since such translations are "global" in nature.

2) void flush_cache_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the caches.  That is, after running, there will be no cache
	lines associated with 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork, exit, and exec.

3) void flush_cache_range(struct mm_struct *mm,
			  unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	addresses from the cache.  After running, there will be no
	entries in the cache for 'mm' for virtual addresses in the
	range 'start' to 'end'.

	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized regions from the cache, instead of having the kernel
	call flush_cache_page (see below) for each entry which may be
	modified.

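	One common trick on virtually indexed caches: a range flush
	never needs to touch more data than the cache holds, so very
	large ranges can degrade to a full flush.  A sketch, where
	DCACHE_SIZE and __flush_dcache_one() are illustrative, not
	real interfaces:

	void flush_cache_range(struct mm_struct *mm,
			       unsigned long start, unsigned long end)
	{
		unsigned long addr;

		if (end - start > DCACHE_SIZE) {
			/* Cheaper to flush the whole cache. */
			flush_cache_all();
			return;
		}
		for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE)
			__flush_dcache_one(mm, addr);	/* hypothetical */
	}
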
4) void flush_cache_page(struct vm_area_struct *vma, unsigned long page)

	This time we need to remove a PAGE_SIZE sized range
	from the cache.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction cache' in
	"Harvard" type cache layouts).

	After running, there will be no entries in the cache for
	'vma->vm_mm' for virtual address 'page'.

	This is used primarily during fault processing.

There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

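Expressed as arithmetic: the number of page "colors" is the virtually
indexed (per-way) cache size divided by PAGE_SIZE, and aliasing is
possible exactly when there is more than one color.  With illustrative
numbers:

	#define DCACHE_WAY_SIZE		(16 * 1024)	/* illustrative */

	/* With 4KB pages this yields 4 colors; two virtual mappings
	 * of one physical page can then index different cache lines.
	 */
	#define DCACHE_COLORS		(DCACHE_WAY_SIZE / PAGE_SIZE)
	#define DCACHE_ALIASING		(DCACHE_COLORS > 1)
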
If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

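For the illustrative 16KB cache above, the asm/shmparam.h definition
would simply be:

	/* Force shared mappings to cache-size alignment so that all
	 * user mappings of a page land on the same cache color.
	 */
	#define SHMLBA	(16 * 1024)
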
Next, you have two methods to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping: that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist, since the kernel already
maps this page at its virtual address.

First, I describe the old method to deal with this problem.  I am
describing it for documentation purposes, but it is deprecated; the
method I describe next should be used by all new ports, and all
existing ports should move over to the new mechanism as well.

void flush_page_to_ram(struct page *page)

	The physical page 'page' is about to be placed into the
	user address space of a process.  If it is possible for
	stores done recently by the kernel into this physical
	page to not be visible to an arbitrary mapping in userspace,
	you must flush this page from the D-cache.

	If the D-cache is writeback in nature, the dirty data (if
	any) for this physical page must be written back to main
	memory before the cache lines are invalidated.

Admittedly, the author did not think very much when designing this
interface.  It does not give the architecture enough information about
what exactly is going on, and there is no context with which to base
any judgment about whether an alias is possible at all.  The new
interfaces to deal with D-cache aliasing are meant to address this by
telling the architecture specific code exactly what is going on at
the proper points in time.

Here is the new interface:

void copy_user_page(void *to, void *from, unsigned long address)
void clear_user_page(void *to, unsigned long address)

	These two routines store data in user anonymous or COW
	pages.  They allow a port to efficiently avoid D-cache alias
	issues between userspace and the kernel.

	For example, a port may temporarily map 'from' and 'to' to
	kernel virtual addresses during the copy.  The virtual address
	for these two pages is chosen in such a way that the kernel
	load/store instructions happen to virtual addresses which are
	of the same "color" as the user mapping of the page.  Sparc64,
	for example, uses this technique.

	The "address" parameter tells the virtual address where the
	user will ultimately have this page mapped.

	If D-cache aliasing is not an issue, these two routines may
	simply call memcpy/memset directly and do nothing more.

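	That is, on a port with no D-cache aliasing, a sketch of the
	trivial versions (many ports express these as macros over
	copy_page/clear_page):

	void copy_user_page(void *to, void *from, unsigned long address)
	{
		memcpy(to, from, PAGE_SIZE);	/* no aliases to worry about */
	}

	void clear_user_page(void *to, unsigned long address)
	{
		memset(to, 0, PAGE_SIZE);	/* no aliases to worry about */
	}
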
void flush_dcache_page(struct page *page)

	Any time the kernel writes to a page cache page, _OR_
	the kernel is about to read from a page cache page and
	user space shared/writable mappings of this page potentially
	exist, this routine is called.

	NOTE: This routine need only be called for page cache pages
	      which can potentially ever be mapped into the address
	      space of a user process.  So for example, VFS layer code
	      handling vfs symlinks in the page cache need not call
	      this interface at all.

	The phrase "kernel writes to a page cache page" means,
	specifically, that the kernel executes store instructions
	that dirty data in that page at the page->virtual mapping
	of that page.  It is important to flush here to handle
	D-cache aliasing, to make sure these kernel stores are
	visible to user space mappings of that page.

	The corollary case is just as important: if there are users
	which have shared+writable mappings of this file, we must make
	sure that kernel reads of these pages will see the most recent
	stores done by the user.

	If D-cache aliasing is not an issue, this routine may
	simply be defined as a nop on that architecture.

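	For example, such an architecture may define it away entirely:

	/* No D-cache aliasing possible: nothing to do. */
	#define flush_dcache_page(page)		do { } while (0)
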
	There is a bit set aside in page->flags (PG_arch_1) as
	"architecture private".  The kernel guarantees that,
	for pagecache pages, it will clear this bit when such
	a page first enters the pagecache.

	This allows these interfaces to be implemented much more
	efficiently.  It allows one to "defer" (perhaps indefinitely)
	the actual flush if there are currently no user processes
	mapping this page.  See sparc64's flush_dcache_page and
	update_mmu_cache implementations for an example of how to go
	about doing this.

321 page->mapping->i_mmap{,_shared} are empty lists, just mark the
322 architecture private page flag bit. Later, in
323 update_mmu_cache(), a check is made of this flag bit, and if
324 set the flush is done and the flag bit is cleared.
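	A simplified sketch of that scheme; mapping_has_users() stands
	in for the i_mmap{,_shared} emptiness test, and
	__flush_dcache_page() for the port's real flush (see the
	sparc64 sources for the genuine article):

	void flush_dcache_page(struct page *page)
	{
		if (page->mapping && !mapping_has_users(page->mapping)) {
			/* No user mappings yet; just note it is dirty. */
			set_bit(PG_arch_1, &page->flags);
			return;
		}
		__flush_dcache_page(page);
	}

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t pte)
	{
		struct page *page = pte_page(pte);

		if (VALID_PAGE(page) &&
		    test_and_clear_bit(PG_arch_1, &page->flags))
			__flush_dcache_page(page);	/* deferred flush */
	}
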
XXX Not documented: flush_icache_page().  We need to talk to Paul
    Mackerras, David Mosberger-Tang, et al. to see what the expected
    semantics of this interface are.  -DaveM