1 MARKING SHARED-MEMORY ACCESSES
2 ==============================
4 This document provides guidelines for marking intentionally concurrent
normal accesses to shared memory, that is, "normal" as in accesses that do
6 not use read-modify-write atomic operations. It also describes how to
7 document these accesses, both with comments and with special assertions
8 processed by the Kernel Concurrency Sanitizer (KCSAN). This discussion
builds on an earlier LWN article [1] and Linux Foundation mentorship
session [2].
13 ACCESS-MARKING OPTIONS
14 ======================
16 The Linux kernel provides the following access-marking options:
18 1. Plain C-language accesses (unmarked), for example, "a = b;"
20 2. Data-race marking, for example, "data_race(a = b);"
22 3. READ_ONCE(), for example, "a = READ_ONCE(b);"
23 The various forms of atomic_read() also fit in here.
25 4. WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
26 The various forms of atomic_set() also fit in here.
28 5. __data_racy, for example "int __data_racy a;"
30 6. KCSAN's negative-marking assertions, ASSERT_EXCLUSIVE_ACCESS()
31 and ASSERT_EXCLUSIVE_WRITER(), are described in the
32 "ACCESS-DOCUMENTATION OPTIONS" section below.
These may be used in combination, as shown in this admittedly improbable
example:
	WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));
39 Neither plain C-language accesses nor data_race() (#1 and #2 above) place
40 any sort of constraint on the compiler's choice of optimizations [3].
41 In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
42 compiler's use of code-motion and common-subexpression optimizations.
43 Therefore, if a given access is involved in an intentional data race,
44 using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
45 preferable to data_race(), which in turn is usually preferable to plain
46 C-language accesses. It is permissible to combine #2 and #3, for example,
47 data_race(READ_ONCE(a)), which will both restrict compiler optimizations
48 and disable KCSAN diagnostics.
50 KCSAN will complain about many types of data races involving plain
51 C-language accesses, but marking all accesses involved in a given data
52 race with one of data_race(), READ_ONCE(), or WRITE_ONCE(), will prevent
53 KCSAN from complaining. Of course, lack of KCSAN complaints does not
54 imply correct code. Therefore, please take a thoughtful approach
55 when responding to KCSAN complaints. Churning the code base with
ill-considered additions of data_race(), READ_ONCE(), and WRITE_ONCE()
is unhelpful.
59 In fact, the following sections describe situations where use of
60 data_race() and even plain C-language accesses is preferable to
61 READ_ONCE() and WRITE_ONCE().
64 Use of the data_race() Macro
65 ----------------------------
67 Here are some situations where data_race() should be used instead of
68 READ_ONCE() and WRITE_ONCE():
70 1. Data-racy loads from shared variables whose values are used only
71 for diagnostic purposes.
73 2. Data-racy reads whose values are checked against marked reload.
75 3. Reads whose values feed into error-tolerant heuristics.
77 4. Writes setting values that feed into error-tolerant heuristics.
80 Data-Racy Reads for Approximate Diagnostics
82 Approximate diagnostics include lockdep reports, monitoring/statistics
83 (including /proc and /sys output), WARN*()/BUG*() checks whose return
84 values are ignored, and other situations where reads from shared variables
85 are not an integral part of the core concurrency design.
In fact, use of data_race() instead of READ_ONCE() for these diagnostic
88 reads can enable better checking of the remaining accesses implementing
89 the core concurrency design. For example, suppose that the core design
90 prevents any non-diagnostic reads from shared variable x from running
91 concurrently with updates to x. Then using plain C-language writes
92 to x allows KCSAN to detect reads from x from within regions of code
93 that fail to exclude the updates. In this case, it is important to use
94 data_race() for the diagnostic reads because otherwise KCSAN would give
95 false-positive warnings about these diagnostic reads.
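For a minimal sketch of this pattern (the variable and lock names here
are illustrative; a fuller example appears in the "Lock Protection With
Lockless Diagnostic Access" section below):

	/* Core design: x is updated only while holding x_lock. */
	spin_lock(&x_lock);
	x++; /* Plain write lets KCSAN check the exclusion rule. */
	spin_unlock(&x_lock);

	/* Diagnostic-only read, possibly racing with the update above. */
	pr_info("approximate x: %d\n", data_race(x));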
97 If it is necessary to both restrict compiler optimizations and disable
98 KCSAN diagnostics, use both data_race() and READ_ONCE(), for example,
99 data_race(READ_ONCE(a)).
101 In theory, plain C-language loads can also be used for this use case.
102 However, in practice this will have the disadvantage of causing KCSAN
103 to generate false positives because KCSAN will have no way of knowing
104 that the resulting data race was intentional.
107 Data-Racy Reads That Are Checked Against Marked Reload
109 The values from some reads are not implicitly trusted. They are instead
110 fed into some operation that checks the full value against a later marked
111 load from memory, which means that the occasional arbitrarily bogus value
112 is not a problem. For example, if a bogus value is fed into cmpxchg(),
113 all that happens is that this cmpxchg() fails, which normally results
114 in a retry. Unless the race condition that resulted in the bogus value
115 recurs, this retry will with high probability succeed, so no harm done.
117 However, please keep in mind that a data_race() load feeding into
118 a cmpxchg_relaxed() might still be subject to load fusing on some
119 architectures. Therefore, it is best to capture the return value from
120 the failing cmpxchg() for the next iteration of the loop, an approach
121 that provides the compiler much less scope for mischievous optimizations.
Capturing the return value from cmpxchg() also saves a memory reference
in many cases.
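As a sketch of this approach (compute_update() is a stand-in for
whatever calculation the loop performs):

	int new, old;
	int v = data_race(foo); /* Checked by the cmpxchg() below. */

	do {
		old = v;
		new = compute_update(old);
		v = cmpxchg(&foo, old, new); /* Returns the value actually seen. */
	} while (v != old);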
125 In theory, plain C-language loads can also be used for this use case.
126 However, in practice this will have the disadvantage of causing KCSAN
127 to generate false positives because KCSAN will have no way of knowing
128 that the resulting data race was intentional.
131 Reads Feeding Into Error-Tolerant Heuristics
133 Values from some reads feed into heuristics that can tolerate occasional
134 errors. Such reads can use data_race(), thus allowing KCSAN to focus on
135 the other accesses to the relevant shared variables. But please note
136 that data_race() loads are subject to load fusing, which can result in
137 consistent errors, which in turn are quite capable of breaking heuristics.
138 Therefore use of data_race() should be limited to cases where some other
139 code (such as a barrier() call) will force the occasional reload.
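For example, a heuristic spin loop might tolerate stale values as long
as something forces an eventual reload (the variable names here are
illustrative):

	/* cpu_relax() implies barrier(), forcing the reload of nr_busy. */
	while (data_race(nr_busy) >= threshold)
		cpu_relax();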
141 Note that this use case requires that the heuristic be able to handle
any possible error. In contrast, if the heuristic might be fatally
143 confused by one or more of the possible erroneous values, use READ_ONCE()
144 instead of data_race().
146 In theory, plain C-language loads can also be used for this use case.
147 However, in practice this will have the disadvantage of causing KCSAN
148 to generate false positives because KCSAN will have no way of knowing
149 that the resulting data race was intentional.
152 Writes Setting Values Feeding Into Error-Tolerant Heuristics
154 The values read into error-tolerant heuristics come from somewhere,
155 for example, from sysfs. This means that some code in sysfs writes
156 to this same variable, and these writes can also use data_race().
157 After all, if the heuristic can tolerate the occasional bogus value
158 due to compiler-mangled reads, it can also tolerate the occasional
159 compiler-mangled write, at least assuming that the proper value is in
160 place once the write completes.
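For example, the store side of such a tuning knob might look as follows
(the variable names are illustrative):

	/* Heuristic readers tolerate a momentarily mangled value. */
	data_race(max_batch_size = new_limit);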
162 Plain C-language stores can also be used for this use case. However,
163 in kernels built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, this
164 will have the disadvantage of causing KCSAN to generate false positives
because KCSAN will have no way of knowing that the resulting data race
was intentional.
169 Use of Plain C-Language Accesses
170 --------------------------------
172 Here are some example situations where plain C-language accesses should
be used instead of READ_ONCE(), WRITE_ONCE(), and data_race():
175 1. Accesses protected by mutual exclusion, including strict locking
176 and sequence locking.
178 2. Initialization-time and cleanup-time accesses. This covers a
179 wide variety of situations, including the uniprocessor phase of
180 system boot, variables to be used by not-yet-spawned kthreads,
181 structures not yet published to reference-counted or RCU-protected
182 data structures, and the cleanup side of any of these situations.
184 3. Per-CPU variables that are not accessed from other CPUs.
186 4. Private per-task variables, including on-stack variables, some
187 fields in the task_struct structure, and task-private heap data.
189 5. Any other loads for which there is not supposed to be a concurrent
190 store to that same variable.
192 6. Any other stores for which there should be neither concurrent
193 loads nor concurrent stores to that same variable.
But note that KCSAN makes three explicit exceptions to this rule
196 by default, refraining from flagging plain C-language stores:
198 a. No matter what. You can override this default by building
199 with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.
201 b. When the store writes the value already contained in
202 that variable. You can override this default by building
203 with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
205 c. When one of the stores is in an interrupt handler and
206 the other in the interrupted code. You can override this
207 default by building with CONFIG_KCSAN_INTERRUPT_WATCHER=y.
209 Note that it is important to use plain C-language accesses in these cases,
210 because doing otherwise prevents KCSAN from detecting violations of your
211 code's synchronization rules.
Use of __data_racy
------------------

Adding the __data_racy type qualifier to the declaration of a variable
218 causes KCSAN to treat all accesses to that variable as if they were
219 enclosed by data_race(). However, __data_racy does not affect the
220 compiler, though one could imagine hardened kernel builds treating the
221 __data_racy type qualifier as if it was the volatile keyword.
223 Note well that __data_racy is subject to the same pointer-declaration
rules as are other type qualifiers such as const and volatile.
For example:
	int __data_racy *p; // Pointer to data-racy data.
	int *__data_racy p; // Data-racy pointer to non-data-racy data.
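As a simple non-pointer sketch (the variable name is illustrative),
a diagnostics-only counter might be declared as:

	int __data_racy nr_retries; // KCSAN treats all accesses as data_race().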
231 ACCESS-DOCUMENTATION OPTIONS
232 ============================
234 It is important to comment marked accesses so that people reading your
235 code, yourself included, are reminded of the synchronization design.
236 However, it is even more important to comment plain C-language accesses
237 that are intentionally involved in data races. Such comments are
238 needed to remind people reading your code, again, yourself included,
239 of how the compiler has been prevented from optimizing those accesses
240 into concurrency bugs.
242 It is also possible to tell KCSAN about your synchronization design.
243 For example, ASSERT_EXCLUSIVE_ACCESS(foo) tells KCSAN that any
244 concurrent access to variable foo by any other CPU is an error, even
245 if that concurrent access is marked with READ_ONCE(). In addition,
246 ASSERT_EXCLUSIVE_WRITER(foo) tells KCSAN that although it is OK for there
247 to be concurrent reads from foo from other CPUs, it is an error for some
248 other CPU to be concurrently writing to foo, even if that concurrent
249 write is marked with data_race() or WRITE_ONCE().
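As a brief sketch of how these assertions are placed (foo_lock and the
update are illustrative; fuller examples appear in the sections below):

	spin_lock(&foo_lock);
	ASSERT_EXCLUSIVE_WRITER(foo); /* Any concurrent write is a bug. */
	WRITE_ONCE(foo, foo + 1);     /* Lockless readers are still OK. */
	spin_unlock(&foo_lock);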
251 Note that although KCSAN will call out data races involving either
252 ASSERT_EXCLUSIVE_ACCESS() or ASSERT_EXCLUSIVE_WRITER() on the one hand
253 and data_race() writes on the other, KCSAN will not report the location
254 of these data_race() writes.
EXAMPLES
========

As noted earlier, the goal is to prevent the compiler from destroying
261 your concurrent algorithm, to help the human reader, and to inform
262 KCSAN of aspects of your concurrency design. This section looks at a
263 few examples showing how this can be done.
266 Lock Protection With Lockless Diagnostic Access
267 -----------------------------------------------
269 For example, suppose a shared variable "foo" is read only while a
270 reader-writer spinlock is read-held, written only while that same
271 spinlock is write-held, except that it is also read locklessly for
272 diagnostic purposes. The code might look as follows:
	int foo;
	DEFINE_RWLOCK(foo_rwlock);

	void update_foo(int newval)
	{
		write_lock(&foo_rwlock);
		foo = newval;
		do_something(newval);
		write_unlock(&foo_rwlock);
	}

	int read_foo(void)
	{
		int ret;

		read_lock(&foo_rwlock);
		ret = foo;
		read_unlock(&foo_rwlock);
		return ret;
	}

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(foo));
	}
301 The reader-writer lock prevents the compiler from introducing concurrency
302 bugs into any part of the main algorithm using foo, which means that
303 the accesses to foo within both update_foo() and read_foo() can (and
304 should) be plain C-language accesses. One benefit of making them be
305 plain C-language accesses is that KCSAN can detect any erroneous lockless
306 reads from or updates to foo. The data_race() in read_foo_diagnostic()
307 tells KCSAN that data races are expected, and should be silently
308 ignored. This data_race() also tells the human reading the code that
309 read_foo_diagnostic() might sometimes return a bogus value.
311 If it is necessary to suppress compiler optimization and also detect
312 buggy lockless writes, read_foo_diagnostic() can be updated as follows:
	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
	}
319 Alternatively, given that KCSAN is to ignore all accesses in this function,
320 this function can be marked __no_kcsan and the data_race() can be dropped:
	void __no_kcsan read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", READ_ONCE(foo));
	}
327 However, in order for KCSAN to detect buggy lockless writes, your kernel
328 must be built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n. If you
329 need KCSAN to detect such a write even if that write did not change
330 the value of foo, you also need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
331 If you need KCSAN to detect such a write happening in an interrupt handler
332 running on the same CPU doing the legitimate lock-protected write, you
333 also need CONFIG_KCSAN_INTERRUPT_WATCHER=y. With some or all of these
334 Kconfig options set properly, KCSAN can be quite helpful, although
335 it is not necessarily a full replacement for hardware watchpoints.
336 On the other hand, neither are hardware watchpoints a full replacement
for KCSAN because it is not always easy to tell hardware watchpoints to
338 conditionally trap on accesses.
341 Lock-Protected Writes With Lockless Reads
342 -----------------------------------------
344 For another example, suppose a shared variable "foo" is updated only
while holding a spinlock, but is read locklessly. The code might look
as follows:
	int foo;
	DEFINE_SPINLOCK(foo_lock);

	void update_foo(int newval)
	{
		spin_lock(&foo_lock);
		WRITE_ONCE(foo, newval);
		ASSERT_EXCLUSIVE_WRITER(foo);
		do_something(newval);
		spin_unlock(&foo_lock);
	}

	int read_foo(void)
	{
		return READ_ONCE(foo);
	}
366 Because foo is read locklessly, all accesses are marked. The purpose
367 of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
368 concurrent write, whether marked or not.
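For example, KCSAN would flag a buggy update that bypasses the lock,
such as this hypothetical function, even though its write is marked:

	void buggy_update_foo(int newval)
	{
		WRITE_ONCE(foo, newval); /* Bug: races with the assertion. */
	}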
371 Lock-Protected Writes With Heuristic Lockless Reads
372 ---------------------------------------------------
374 For another example, suppose that the code can normally make use of
375 a per-data-structure lock, but there are times when a global lock
376 is required. These times are indicated via a global flag. The code
377 might look as follows, and is based loosely on nf_conntrack_lock(),
378 nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
	bool global_flag;
	DEFINE_SPINLOCK(global_lock);
	struct foo {
		spinlock_t f_lock;
		int f_data;
	};

	/* All foo structures are in the following array. */
	int nfoo;
	struct foo *foo_array;

	void do_something_locked(struct foo *fp)
	{
		/* This works even if data_race() returns nonsense. */
		if (!data_race(global_flag)) {
			spin_lock(&fp->f_lock);
			if (!smp_load_acquire(&global_flag)) {
				do_something(fp);
				spin_unlock(&fp->f_lock);
				return;
			}
			spin_unlock(&fp->f_lock);
		}
		spin_lock(&global_lock);
		/* global_lock held, thus global flag cannot be set. */
		spin_lock(&fp->f_lock);
		spin_unlock(&global_lock);
		/*
		 * global_flag might be set here, but begin_global()
		 * will wait for ->f_lock to be released.
		 */
		do_something(fp);
		spin_unlock(&fp->f_lock);
	}

	void begin_global(void)
	{
		int i;

		spin_lock(&global_lock);
		WRITE_ONCE(global_flag, true);
		for (i = 0; i < nfoo; i++) {
			/*
			 * Wait for pre-existing local locks. One at
			 * a time to avoid lockdep limitations.
			 */
			spin_lock(&foo_array[i].f_lock);
			spin_unlock(&foo_array[i].f_lock);
		}
	}

	void end_global(void)
	{
		smp_store_release(&global_flag, false);
		spin_unlock(&global_lock);
	}
437 All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.
If the value read from global_flag is false, then global_flag is
rechecked while holding ->f_lock, which, if global_flag is still false,
prevents begin_global() from completing. It is therefore safe to invoke
do_something().
446 Otherwise, if either value read from global_flag is true, then after
447 global_lock is acquired global_flag must be false. The acquisition of
448 ->f_lock will prevent any call to begin_global() from returning, which
449 means that it is safe to release global_lock and invoke do_something().
451 For this to work, only those foo structures in foo_array[] may be passed
452 to do_something_locked(). The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.
456 The smp_load_acquire() and smp_store_release() are required because
457 changes to a foo structure between calls to begin_global() and
458 end_global() are carried out without holding that structure's ->f_lock.
459 The smp_load_acquire() and smp_store_release() ensure that the next
invocation of do_something() from do_something_locked() will see those
changes.
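For example, an update carried out in global mode might look as follows
(the ->f_data manipulation and index i are illustrative):

	begin_global();
	foo_array[i].f_data = newdata; /* No ->f_lock needed in global mode. */
	end_global(); /* The smp_store_release() publishes the update. */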
464 Lockless Reads and Writes
465 -------------------------
467 For another example, suppose a shared variable "foo" is both read and
468 updated locklessly. The code might look as follows:
	int foo;

	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	int read_foo(void)
	{
		return READ_ONCE(foo);
	}
487 Because foo is accessed locklessly, all accesses are marked. It does
488 not make sense to use ASSERT_EXCLUSIVE_WRITER() in this case because
489 there really can be concurrent lockless writers. KCSAN would
490 flag any concurrent plain C-language reads from foo, and given
491 CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, also any concurrent plain
492 C-language writes to foo.
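For example, KCSAN would flag this hypothetical unmarked read:

	int buggy_read_foo(void)
	{
		return foo; /* Bug: unmarked, races with update_foo()'s xchg(). */
	}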
495 Lockless Reads and Writes, But With Single-Threaded Initialization
496 ------------------------------------------------------------------
498 For yet another example, suppose that foo is initialized in a
499 single-threaded manner, but that a number of kthreads are then created
500 that locklessly and concurrently access foo. Some snippets of this code
501 might look as follows:
	int foo;

	void initialize_foo(int initval, int nkthreads)
	{
		int i;

		foo = initval;
		ASSERT_EXCLUSIVE_ACCESS(foo);
		for (i = 0; i < nkthreads; i++)
			kthread_run(access_foo_concurrently, ...);
	}

	/* Called from access_foo_concurrently(). */
	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	/* Also called from access_foo_concurrently(). */
	int read_foo(void)
	{
		return READ_ONCE(foo);
	}
532 The initialize_foo() uses a plain C-language write to foo because there
533 are not supposed to be concurrent accesses during initialization. The
534 ASSERT_EXCLUSIVE_ACCESS() allows KCSAN to flag buggy concurrent unmarked
535 reads, and the ASSERT_EXCLUSIVE_ACCESS() call further allows KCSAN to
536 flag buggy concurrent writes, even if: (1) Those writes are marked or
537 (2) The kernel was built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
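For example, if one of the kthreads managed to start early, KCSAN would
flag a write like the following (hypothetical) one, even though it is
marked:

	WRITE_ONCE(foo, 42); /* Bug: races with initialize_foo()'s assertion. */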
540 Checking Stress-Test Race Coverage
541 ----------------------------------
543 When designing stress tests it is important to ensure that race conditions
of interest really do occur. For example, consider the following code
fragment:
	int foo;

	int update_foo(int newval)
	{
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		return READ_ONCE(foo);
	}
572 If it is possible for update_foo(), xor_shift_foo(), and read_foo() to be
573 invoked concurrently, the stress test should force this concurrency to
574 actually happen. KCSAN can evaluate the stress test when the above code
575 is modified to read as follows:
	int foo;

	int update_foo(int newval)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			ASSERT_EXCLUSIVE_ACCESS(foo);
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return READ_ONCE(foo);
	}
606 If a given stress-test run does not result in KCSAN complaints from
607 each possible pair of ASSERT_EXCLUSIVE_ACCESS() invocations, the
stress test needs improvement. If the stress test were to be evaluated
609 on a regular basis, it would be wise to place the above instances of
610 ASSERT_EXCLUSIVE_ACCESS() under #ifdef so that they did not result in
611 false positives when not evaluating the stress test.
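One way to do this is via a wrapper macro along these lines
(CONFIG_STRESS_TEST_COVERAGE is a hypothetical Kconfig option):

	#ifdef CONFIG_STRESS_TEST_COVERAGE
	#define COVERAGE_ASSERT_EXCLUSIVE_ACCESS(x) ASSERT_EXCLUSIVE_ACCESS(x)
	#else
	#define COVERAGE_ASSERT_EXCLUSIVE_ACCESS(x) do { } while (0)
	#endif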
617 [1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
618 https://lwn.net/Articles/816854/
620 [2] "The Kernel Concurrency Sanitizer"
621 https://www.linuxfoundation.org/webinars/the-kernel-concurrency-sanitizer
623 [3] "Who's afraid of a big bad optimizing compiler?"
624 https://lwn.net/Articles/793253/