1 ===========================
2 Unreliable Guide To Locking
3 ===========================
Welcome to Rusty's Remarkably Unreliable Guide to Kernel Locking
11 issues. This document describes the locking systems in the Linux Kernel
14 With the wide availability of HyperThreading, and preemption in the
15 Linux Kernel, everyone hacking on the kernel needs to know the
16 fundamentals of concurrency and locking for SMP.
18 The Problem With Concurrency
19 ============================
21 (Skip this if you know what a Race Condition is).
In a normal program, you can increment a counter like so::

    very_important_count++;

30 This is what they would expect to happen:
.. table:: Expected Results

    +------------------------------------+------------------------------------+
    | Instance 1                         | Instance 2                         |
    +====================================+====================================+
    | read very_important_count (5)      |                                    |
    +------------------------------------+------------------------------------+
    | add 1 (6)                          |                                    |
    +------------------------------------+------------------------------------+
    | write very_important_count (6)     |                                    |
    +------------------------------------+------------------------------------+
    |                                    | read very_important_count (6)      |
    +------------------------------------+------------------------------------+
    |                                    | add 1 (7)                          |
    +------------------------------------+------------------------------------+
    |                                    | write very_important_count (7)     |
    +------------------------------------+------------------------------------+

51 This is what might happen:
.. table:: Possible Results

    +------------------------------------+------------------------------------+
    | Instance 1                         | Instance 2                         |
    +====================================+====================================+
    | read very_important_count (5)      |                                    |
    +------------------------------------+------------------------------------+
    |                                    | read very_important_count (5)      |
    +------------------------------------+------------------------------------+
    | add 1 (6)                          |                                    |
    +------------------------------------+------------------------------------+
    |                                    | add 1 (6)                          |
    +------------------------------------+------------------------------------+
    | write very_important_count (6)     |                                    |
    +------------------------------------+------------------------------------+
    |                                    | write very_important_count (6)     |
    +------------------------------------+------------------------------------+

72 Race Conditions and Critical Regions
73 ------------------------------------
75 This overlap, where the result depends on the relative timing of
76 multiple tasks, is called a race condition. The piece of code containing
77 the concurrency issue is called a critical region. And especially since
Linux started running on SMP machines, they became one of the major
79 issues in kernel design and implementation.
81 Preemption can have the same effect, even if there is only one CPU: by
82 preempting one task during the critical region, we have exactly the same
83 race condition. In this case the thread which preempts might run the
84 critical region itself.
86 The solution is to recognize when these simultaneous accesses occur, and
87 use locks to make sure that only one instance can enter the critical
88 region at any time. There are many friendly primitives in the Linux
89 kernel to help you do this. And then there are the unfriendly
90 primitives, but I'll pretend they don't exist.
92 Locking in the Linux Kernel
93 ===========================
95 If I could give you one piece of advice: never sleep with anyone crazier
than yourself. But if I had to give you advice on locking: **keep it
simple**.
99 Be reluctant to introduce new locks.
101 Strangely enough, this last one is the exact reverse of my advice when
102 you **have** slept with someone crazier than yourself. And you should
103 think about getting a big dog.
105 Two Main Types of Kernel Locks: Spinlocks and Mutexes
106 -----------------------------------------------------
108 There are two main types of kernel locks. The fundamental type is the
109 spinlock (``include/asm/spinlock.h``), which is a very simple
110 single-holder lock: if you can't get the spinlock, you keep trying
(spinning) until you can. Spinlocks are very small and fast, and can be
used anywhere.
114 The second type is a mutex (``include/linux/mutex.h``): it is like a
115 spinlock, but you may block holding a mutex. If you can't lock a mutex,
116 your task will suspend itself, and be woken up when the mutex is
117 released. This means the CPU can do something else while you are
118 waiting. There are many cases when you simply can't sleep (see
119 `What Functions Are Safe To Call From Interrupts? <#sleeping-things>`__),
120 and so have to use a spinlock instead.
122 Neither type of lock is recursive: see
123 `Deadlock: Simple and Advanced <#deadlock>`__.
125 Locks and Uniprocessor Kernels
126 ------------------------------
128 For kernels compiled without ``CONFIG_SMP``, and without
``CONFIG_PREEMPT``, spinlocks do not exist at all. This is an excellent
130 design decision: when no-one else can run at the same time, there is no
131 reason to have a lock.
133 If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT``
134 is set, then spinlocks simply disable preemption, which is sufficient to
135 prevent any races. For most purposes, we can think of preemption as
136 equivalent to SMP, and not worry about it separately.
138 You should always test your locking code with ``CONFIG_SMP`` and
139 ``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box,
140 because it will still catch some kinds of locking bugs.
142 Mutexes still exist, because they are required for synchronization
143 between user contexts, as we will see below.
145 Locking Only In User Context
146 ----------------------------
148 If you have a data structure which is only ever accessed from user
149 context, then you can use a simple mutex (``include/linux/mutex.h``) to
150 protect it. This is the most trivial case: you initialize the mutex.
151 Then you can call :c:func:`mutex_lock_interruptible()` to grab the
152 mutex, and :c:func:`mutex_unlock()` to release it. There is also a
:c:func:`mutex_lock()`, which should be avoided, because it will
not give up and return if a signal is received.
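
A minimal sketch of this pattern (the ``frob_config`` structure and the
function name are invented for illustration)::

    struct frob_config {
            int threshold;
            char label[32];
    };
    static struct frob_config config;
    static DEFINE_MUTEX(config_lock);       /* protects config */

    /* Called only from user context (a syscall), so sleeping is fine. */
    int frob_set_label(const char *label)
    {
            if (mutex_lock_interruptible(&config_lock))
                    return -ERESTARTSYS;    /* interrupted by a signal */
            strlcpy(config.label, label, sizeof(config.label));
            mutex_unlock(&config_lock);
            return 0;
    }
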
156 Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
157 :c:func:`setsockopt()` and :c:func:`getsockopt()` calls, with
158 :c:func:`nf_register_sockopt()`. Registration and de-registration
159 are only done on module load and unload (and boot time, where there is
160 no concurrency), and the list of registrations is only consulted for an
161 unknown :c:func:`setsockopt()` or :c:func:`getsockopt()` system
162 call. The ``nf_sockopt_mutex`` is perfect to protect this, especially
163 since the setsockopt and getsockopt calls may well sleep.
165 Locking Between User Context and Softirqs
166 -----------------------------------------
168 If a softirq shares data with user context, you have two problems.
169 Firstly, the current user context can be interrupted by a softirq, and
170 secondly, the critical region could be entered from another CPU. This is
171 where :c:func:`spin_lock_bh()` (``include/linux/spinlock.h``) is
172 used. It disables softirqs on that CPU, then grabs the lock.
173 :c:func:`spin_unlock_bh()` does the reverse. (The '_bh' suffix is
174 a historical reference to "Bottom Halves", the old name for software
interrupts. It should really be called 'spin_lock_softirq()' in a
perfect world.)
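
Here is a minimal sketch, assuming an invented packet counter shared
between a syscall and a softirq::

    static DEFINE_SPINLOCK(counter_lock);   /* protects pkt_count */
    static unsigned long pkt_count;

    /* User context: keep the softirq off this CPU while we hold the lock. */
    void counter_reset(void)
    {
            spin_lock_bh(&counter_lock);
            pkt_count = 0;
            spin_unlock_bh(&counter_lock);
    }

    /* Softirq context: another softirq can't run on this CPU anyway,
       so plain spin_lock() suffices against the other CPUs. */
    void counter_bump(void)
    {
            spin_lock(&counter_lock);
            pkt_count++;
            spin_unlock(&counter_lock);
    }
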
178 Note that you can also use :c:func:`spin_lock_irq()` or
179 :c:func:`spin_lock_irqsave()` here, which stop hardware interrupts
180 as well: see `Hard IRQ Context <#hardirq-context>`__.
182 This works perfectly for UP as well: the spin lock vanishes, and this
183 macro simply becomes :c:func:`local_bh_disable()`
(``include/linux/interrupt.h``), which protects you from the softirq
being run.
187 Locking Between User Context and Tasklets
188 -----------------------------------------
This is exactly the same as above, because tasklets are actually run
from a softirq.
193 Locking Between User Context and Timers
194 ---------------------------------------
196 This, too, is exactly the same as above, because timers are actually run
from a softirq. From a locking point of view, tasklets and timers are
identical.
200 Locking Between Tasklets/Timers
201 -------------------------------
Sometimes a tasklet or timer might want to share data with another
tasklet or timer.
206 The Same Tasklet/Timer
207 ~~~~~~~~~~~~~~~~~~~~~~
209 Since a tasklet is never run on two CPUs at once, you don't need to
worry about your tasklet being reentrant (running twice at once), even
on SMP.
213 Different Tasklets/Timers
214 ~~~~~~~~~~~~~~~~~~~~~~~~~
If another tasklet/timer wants to share data with your tasklet or
timer, you will both need to use :c:func:`spin_lock()` and
218 :c:func:`spin_unlock()` calls. :c:func:`spin_lock_bh()` is
unnecessary here, as you are already in a tasklet, and none will be run
on the same CPU.
222 Locking Between Softirqs
223 ------------------------
225 Often a softirq might want to share data with itself or a tasklet/timer.
The Same Softirq
~~~~~~~~~~~~~~~~

The same softirq can run on the other CPUs: you can use a per-CPU array
231 (see `Per-CPU Data <#per-cpu>`__) for better performance. If you're
232 going so far as to use a softirq, you probably care about scalable
233 performance enough to justify the extra complexity.
235 You'll need to use :c:func:`spin_lock()` and
236 :c:func:`spin_unlock()` for shared data.
Different Softirqs
~~~~~~~~~~~~~~~~~~

You'll need to use :c:func:`spin_lock()` and
242 :c:func:`spin_unlock()` for shared data, whether it be a timer,
243 tasklet, different softirq or the same or another softirq: any of them
244 could be running on a different CPU.
Hard IRQ Context
================

Hardware interrupts usually communicate with a tasklet or softirq.
Frequently this involves putting work in a queue, which the softirq will
take out.
253 Locking Between Hard IRQ and Softirqs/Tasklets
254 ----------------------------------------------
256 If a hardware irq handler shares data with a softirq, you have two
257 concerns. Firstly, the softirq processing can be interrupted by a
258 hardware interrupt, and secondly, the critical region could be entered
259 by a hardware interrupt on another CPU. This is where
260 :c:func:`spin_lock_irq()` is used. It is defined to disable
261 interrupts on that cpu, then grab the lock.
262 :c:func:`spin_unlock_irq()` does the reverse.
The irq handler does not need to use :c:func:`spin_lock_irq()`, because
265 the softirq cannot run while the irq handler is running: it can use
266 :c:func:`spin_lock()`, which is slightly faster. The only exception
267 would be if a different hardware irq handler uses the same lock:
268 :c:func:`spin_lock_irq()` will stop that from interrupting us.
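
A sketch of this division of labour, with invented names: the handler
queues work with plain :c:func:`spin_lock()`, while the softirq side
uses :c:func:`spin_lock_irq()`::

    struct work_item {
            struct list_head list;
            int data;
    };
    static DEFINE_SPINLOCK(queue_lock);     /* protects work_queue */
    static LIST_HEAD(work_queue);

    /* Hard irq handler: a softirq cannot run here, so plain spin_lock()
       only has to keep out the softirq on other CPUs. */
    static irqreturn_t dev_interrupt(int irq, void *dev)
    {
            struct work_item *item = kmalloc(sizeof(*item), GFP_ATOMIC);

            if (item) {
                    item->data = 1;
                    spin_lock(&queue_lock);
                    list_add_tail(&item->list, &work_queue);
                    spin_unlock(&queue_lock);
            }
            return IRQ_HANDLED;
    }

    /* Softirq side: disable interrupts so the handler above cannot
       cut in on this CPU while we hold the lock. */
    static void dev_process_queue(void)
    {
            spin_lock_irq(&queue_lock);
            while (!list_empty(&work_queue)) {
                    struct work_item *item;

                    item = list_first_entry(&work_queue,
                                            struct work_item, list);
                    list_del(&item->list);
                    kfree(item);
            }
            spin_unlock_irq(&queue_lock);
    }
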
270 This works perfectly for UP as well: the spin lock vanishes, and this
271 macro simply becomes :c:func:`local_irq_disable()`
(``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
being run.
275 :c:func:`spin_lock_irqsave()` (``include/linux/spinlock.h``) is a
276 variant which saves whether interrupts were on or off in a flags word,
277 which is passed to :c:func:`spin_unlock_irqrestore()`. This means
that the same code can be used inside a hard irq handler (where
interrupts are already off) and in softirqs (where the irq disabling is
already done).
282 Note that softirqs (and hence tasklets and timers) are run on return
283 from hardware interrupts, so :c:func:`spin_lock_irq()` also stops
284 these. In that sense, :c:func:`spin_lock_irqsave()` is the most
285 general and powerful locking function.
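
For instance, a counter which may be bumped from any context at all (an
invented example)::

    static DEFINE_SPINLOCK(stats_lock);     /* protects error_count */
    static unsigned int error_count;

    /* Safe from user context, softirq and hard irq alike: the interrupt
       state is saved and restored rather than assumed. */
    void record_error(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&stats_lock, flags);
            error_count++;
            spin_unlock_irqrestore(&stats_lock, flags);
    }
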
287 Locking Between Two Hard IRQ Handlers
288 -------------------------------------
290 It is rare to have to share data between two IRQ handlers, but if you
291 do, :c:func:`spin_lock_irqsave()` should be used: it is
architecture-specific whether all interrupts are disabled inside irq
handlers themselves.
295 Cheat Sheet For Locking
296 =======================
298 Pete Zaitcev gives the following summary:
- If you are in a process context (any syscall) and want to lock other
  processes out, use a mutex. You can take a mutex and sleep
  (``copy_from_user()`` or ``kmalloc(x,GFP_KERNEL)``).
- Otherwise (== data can be touched in an interrupt), use
  :c:func:`spin_lock_irqsave()` and
  :c:func:`spin_unlock_irqrestore()`.
- Avoid holding a spinlock for more than 5 lines of code and across any
  function call (except accessors like :c:func:`readb()`).
311 Table of Minimum Requirements
312 -----------------------------
314 The following table lists the **minimum** locking requirements between
315 various contexts. In some cases, the same context can only be running on
316 one CPU at a time, so no locking is required for that context (eg. a
particular thread can only run on one CPU at a time, but if it needs to
share data with another thread, locking is required).
320 Remember the advice above: you can always use
:c:func:`spin_lock_irqsave()`, which is a superset of all other
spinlock primitives.

============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
.              IRQ Handler A IRQ Handler B Softirq A Softirq B Tasklet A Tasklet B Timer A Timer B User Context A User Context B
============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
IRQ Handler A  None
IRQ Handler B  SLIS          None
Softirq A      SLI           SLI           SL
Softirq B      SLI           SLI           SL        SL
Tasklet A      SLI           SLI           SL        SL        None
Tasklet B      SLI           SLI           SL        SL        SL        None
Timer A        SLI           SLI           SL        SL        SL        SL        None
Timer B        SLI           SLI           SL        SL        SL        SL        SL      None
User Context A SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    None
User Context B SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    MLI            None
============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
339 Table: Table of Locking Requirements
+--------+----------------------------+
| SLIS   | spin_lock_irqsave          |
+--------+----------------------------+
| SLI    | spin_lock_irq              |
+--------+----------------------------+
| SL     | spin_lock                  |
+--------+----------------------------+
| SLBH   | spin_lock_bh               |
+--------+----------------------------+
| MLI    | mutex_lock_interruptible   |
+--------+----------------------------+

353 Table: Legend for Locking Requirements Table
355 The trylock Functions
356 =====================
There are functions that try to acquire a lock only once and
immediately return a value indicating success or failure to acquire the
lock. They can be used if you need no access to the data protected by
the lock when some other thread is holding it. You can acquire the lock
later if you then need access to the protected data.
364 :c:func:`spin_trylock()` does not spin but returns non-zero if it
365 acquires the spinlock on the first try or 0 if not. This function can be
366 used in all contexts like :c:func:`spin_lock()`: you must have
disabled the contexts that might interrupt you and acquire the spin
lock.
370 :c:func:`mutex_trylock()` does not suspend your task but returns
371 non-zero if it could lock the mutex on the first try or 0 if not. This
372 function cannot be safely used in hardware or software interrupt
373 contexts despite not sleeping.
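
As an illustration, here is an invented statistics counter which simply
drops a sample when the lock is contended, rather than spinning::

    static DEFINE_SPINLOCK(sample_lock);    /* protects sample_count */
    static unsigned long sample_count;

    void maybe_count_sample(void)
    {
            /* Non-zero return means we got the lock. */
            if (!spin_trylock(&sample_lock))
                    return;                 /* contended: skip this one */
            sample_count++;
            spin_unlock(&sample_lock);
    }
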
Common Examples
===============

Let's step through a simple example: a cache of number to name mappings.
379 The cache keeps a count of how often each of the objects is used, and
380 when it gets full, throws out the least used one.
All In User Context
-------------------

For our first example, we assume that all operations are in user context
386 (ie. from system calls), so we can sleep. This means we can use a mutex
387 to protect the cache and all the objects within it. Here's the code::

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/mutex.h>
    #include <asm/errno.h>

    struct object
    {
            struct list_head list;
            int id;
            char name[32];
            int popularity;
    };

    /* Protects the cache, cache_num, and the objects within it */
    static DEFINE_MUTEX(cache_lock);
    static LIST_HEAD(cache);
    static unsigned int cache_num = 0;
    #define MAX_CACHE_SIZE 10

    /* Must be holding cache_lock */
    static struct object *__cache_find(int id)
    {
            struct object *i;

            list_for_each_entry(i, &cache, list)
                    if (i->id == id) {
                            i->popularity++;
                            return i;
                    }
            return NULL;
    }

    /* Must be holding cache_lock */
    static void __cache_delete(struct object *obj)
    {
            BUG_ON(!obj);
            list_del(&obj->list);
            kfree(obj);
            cache_num--;
    }

    /* Must be holding cache_lock */
    static void __cache_add(struct object *obj)
    {
            list_add(&obj->list, &cache);
            if (++cache_num > MAX_CACHE_SIZE) {
                    struct object *i, *outcast = NULL;
                    list_for_each_entry(i, &cache, list) {
                            if (!outcast || i->popularity < outcast->popularity)
                                    outcast = i;
                    }
                    __cache_delete(outcast);
            }
    }

    int cache_add(int id, const char *name)
    {
            struct object *obj;

            if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                    return -ENOMEM;

            strlcpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            mutex_lock(&cache_lock);
            __cache_add(obj);
            mutex_unlock(&cache_lock);
            return 0;
    }

    void cache_delete(int id)
    {
            mutex_lock(&cache_lock);
            __cache_delete(__cache_find(id));
            mutex_unlock(&cache_lock);
    }

    int cache_find(int id, char *name)
    {
            int ret = -ENOENT;
            struct object *obj;

            mutex_lock(&cache_lock);
            obj = __cache_find(id);
            if (obj) {
                    ret = 0;
                    strcpy(name, obj->name);
            }
            mutex_unlock(&cache_lock);
            return ret;
    }

484 Note that we always make sure we have the cache_lock when we add,
485 delete, or look up the cache: both the cache infrastructure itself and
486 the contents of the objects are protected by the lock. In this case it's
easy, since we copy the data for the user, and never let them access the
object directly.
490 There is a slight (and common) optimization here: in
491 :c:func:`cache_add()` we set up the fields of the object before
grabbing the lock. This is safe, as no-one else can access it until we
put it in the cache.
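
For completeness, here is a sketch of how a caller in user context might
use this cache (the id 27 and the name are, of course, made up)::

    int example_usage(void)
    {
            char name[32];
            int ret;

            ret = cache_add(27, "fish");
            if (ret)
                    return ret;

            ret = cache_find(27, name);
            if (ret == 0)
                    printk(KERN_INFO "object 27 is called '%s'\n", name);

            cache_delete(27);
            return ret;
    }
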
495 Accessing From Interrupt Context
496 --------------------------------
498 Now consider the case where :c:func:`cache_find()` can be called
499 from interrupt context: either a hardware interrupt or a softirq. An
example would be a timer which deletes objects from the cache.
502 The change is shown below, in standard patch format: the ``-`` are lines
503 which are taken away, and the ``+`` are lines which are added.
507 --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
508 +++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100
513 -static DEFINE_MUTEX(cache_lock);
514 +static DEFINE_SPINLOCK(cache_lock);
515 static LIST_HEAD(cache);
516 static unsigned int cache_num = 0;
517 #define MAX_CACHE_SIZE 10
519 int cache_add(int id, const char *name)
522 + unsigned long flags;
524 if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
530 - mutex_lock(&cache_lock);
531 + spin_lock_irqsave(&cache_lock, flags);
533 - mutex_unlock(&cache_lock);
534 + spin_unlock_irqrestore(&cache_lock, flags);
538 void cache_delete(int id)
540 - mutex_lock(&cache_lock);
541 + unsigned long flags;
543 + spin_lock_irqsave(&cache_lock, flags);
544 __cache_delete(__cache_find(id));
545 - mutex_unlock(&cache_lock);
546 + spin_unlock_irqrestore(&cache_lock, flags);
549 int cache_find(int id, char *name)
553 + unsigned long flags;
555 - mutex_lock(&cache_lock);
556 + spin_lock_irqsave(&cache_lock, flags);
557 obj = __cache_find(id);
560 strcpy(name, obj->name);
562 - mutex_unlock(&cache_lock);
563 + spin_unlock_irqrestore(&cache_lock, flags);
Note that :c:func:`spin_lock_irqsave()` will turn off interrupts if
they are on, and otherwise does nothing (if we are already in an
interrupt handler); hence these functions are safe to call from any
context.
572 Unfortunately, :c:func:`cache_add()` calls :c:func:`kmalloc()`
573 with the ``GFP_KERNEL`` flag, which is only legal in user context. I
574 have assumed that :c:func:`cache_add()` is still only called in
575 user context, otherwise this should become a parameter to
576 :c:func:`cache_add()`.
578 Exposing Objects Outside This File
579 ----------------------------------
581 If our objects contained more information, it might not be sufficient to
582 copy the information in and out: other parts of the code might want to
583 keep pointers to these objects, for example, rather than looking up the
584 id every time. This produces two problems.
586 The first problem is that we use the ``cache_lock`` to protect objects:
587 we'd need to make this non-static so the rest of the code can use it.
588 This makes locking trickier, as it is no longer all in one place.
590 The second problem is the lifetime problem: if another structure keeps a
591 pointer to an object, it presumably expects that pointer to remain
592 valid. Unfortunately, this is only guaranteed while you hold the lock,
593 otherwise someone might call :c:func:`cache_delete()` and even
594 worse, add another object, re-using the same address.
As there is only one lock, you can't hold it forever: no-one else would
get any work done.
599 The solution to this problem is to use a reference count: everyone who
600 has a pointer to the object increases it when they first get the object,
601 and drops the reference count when they're finished with it. Whoever
602 drops it to zero knows it is unused, and can actually delete it.
606 --- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100
607 +++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100
611 struct list_head list;
612 + unsigned int refcnt;
617 static unsigned int cache_num = 0;
618 #define MAX_CACHE_SIZE 10
620 +static void __object_put(struct object *obj)
622 + if (--obj->refcnt == 0)
626 +static void __object_get(struct object *obj)
631 +void object_put(struct object *obj)
633 + unsigned long flags;
635 + spin_lock_irqsave(&cache_lock, flags);
637 + spin_unlock_irqrestore(&cache_lock, flags);
640 +void object_get(struct object *obj)
642 + unsigned long flags;
644 + spin_lock_irqsave(&cache_lock, flags);
646 + spin_unlock_irqrestore(&cache_lock, flags);
649 /* Must be holding cache_lock */
650 static struct object *__cache_find(int id)
655 list_del(&obj->list);
661 strlcpy(obj->name, name, sizeof(obj->name));
664 + obj->refcnt = 1; /* The cache holds a reference */
666 spin_lock_irqsave(&cache_lock, flags);
669 spin_unlock_irqrestore(&cache_lock, flags);
672 -int cache_find(int id, char *name)
673 +struct object *cache_find(int id)
679 spin_lock_irqsave(&cache_lock, flags);
680 obj = __cache_find(id);
683 - strcpy(name, obj->name);
687 spin_unlock_irqrestore(&cache_lock, flags);
692 We encapsulate the reference counting in the standard 'get' and 'put'
693 functions. Now we can return the object itself from
694 :c:func:`cache_find()` which has the advantage that the user can
now sleep holding the object (eg. to :c:func:`copy_to_user()` to
copy the name to userspace).
698 The other point to note is that I said a reference should be held for
699 every pointer to the object: thus the reference count is 1 when first
700 inserted into the cache. In some versions the framework does not hold a
701 reference count, but they are more complicated.
703 Using Atomic Operations For The Reference Count
704 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
706 In practice, :c:type:`atomic_t` would usually be used for refcnt. There are a
707 number of atomic operations defined in ``include/asm/atomic.h``: these
708 are guaranteed to be seen atomically from all CPUs in the system, so no
709 lock is required. In this case, it is simpler than using spinlocks,
710 although for anything non-trivial using spinlocks is clearer. The
711 :c:func:`atomic_inc()` and :c:func:`atomic_dec_and_test()`
712 are used instead of the standard increment and decrement operators, and
713 the lock is no longer used to protect the reference count itself.
717 --- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100
718 +++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100
722 struct list_head list;
723 - unsigned int refcnt;
729 static unsigned int cache_num = 0;
730 #define MAX_CACHE_SIZE 10
732 -static void __object_put(struct object *obj)
734 - if (--obj->refcnt == 0)
738 -static void __object_get(struct object *obj)
743 void object_put(struct object *obj)
745 - unsigned long flags;
747 - spin_lock_irqsave(&cache_lock, flags);
749 - spin_unlock_irqrestore(&cache_lock, flags);
750 + if (atomic_dec_and_test(&obj->refcnt))
754 void object_get(struct object *obj)
756 - unsigned long flags;
758 - spin_lock_irqsave(&cache_lock, flags);
760 - spin_unlock_irqrestore(&cache_lock, flags);
761 + atomic_inc(&obj->refcnt);
764 /* Must be holding cache_lock */
768 list_del(&obj->list);
775 strlcpy(obj->name, name, sizeof(obj->name));
778 - obj->refcnt = 1; /* The cache holds a reference */
779 + atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
781 spin_lock_irqsave(&cache_lock, flags);
784 spin_lock_irqsave(&cache_lock, flags);
785 obj = __cache_find(id);
789 spin_unlock_irqrestore(&cache_lock, flags);
793 Protecting The Objects Themselves
794 ---------------------------------
796 In these examples, we assumed that the objects (except the reference
797 counts) never changed once they are created. If we wanted to allow the
798 name to change, there are three possibilities:
800 - You can make ``cache_lock`` non-static, and tell people to grab that
801 lock before changing the name in any object.
803 - You can provide a :c:func:`cache_obj_rename()` which grabs this
lock and changes the name for the caller, and tell everyone to use
that function.
807 - You can make the ``cache_lock`` protect only the cache itself, and
808 use another lock to protect the name.
810 Theoretically, you can make the locks as fine-grained as one lock for
every field, for every object. In practice, the most common variants
are:
814 - One lock which protects the infrastructure (the ``cache`` list in
815 this example) and all the objects. This is what we have done so far.
817 - One lock which protects the infrastructure (including the list
818 pointers inside the objects), and one lock inside the object which
819 protects the rest of that object.
821 - Multiple locks to protect the infrastructure (eg. one lock per hash
822 chain), possibly with a separate per-object lock.
824 Here is the "lock-per-object" implementation:
828 --- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100
829 +++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
834 + /* These two protected by cache_lock. */
835 struct list_head list;
840 + /* Doesn't change once created. */
843 + spinlock_t lock; /* Protects the name */
848 static DEFINE_SPINLOCK(cache_lock);
852 atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
853 + spin_lock_init(&obj->lock);
855 spin_lock_irqsave(&cache_lock, flags);
Note that I decided that the popularity count should be protected by the
859 ``cache_lock`` rather than the per-object lock: this is because it (like
860 the :c:type:`struct list_head <list_head>` inside the object)
861 is logically part of the infrastructure. This way, I don't need to grab
the lock of every object in :c:func:`__cache_add()` when seeking
the least popular.
865 I also decided that the id member is unchangeable, so I don't need to
866 grab each object lock in :c:func:`__cache_find()` to examine the
id: the object lock is only used by a caller who wants to read or write
the ``name`` field.
870 Note also that I added a comment describing what data was protected by
871 which locks. This is extremely important, as it describes the runtime
872 behavior of the code, and can be hard to gain from just reading. And as
873 Alan Cox says, “Lock data, not code”.
Common Problems
===============

Deadlock: Simple and Advanced
879 -----------------------------
881 There is a coding bug where a piece of code tries to grab a spinlock
882 twice: it will spin forever, waiting for the lock to be released
883 (spinlocks, rwlocks and mutexes are not recursive in Linux). This is
884 trivial to diagnose: not a
885 stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.
887 For a slightly more complex case, imagine you have a region shared by a
888 softirq and user context. If you use a :c:func:`spin_lock()` call
889 to protect it, it is possible that the user context will be interrupted
890 by the softirq while it holds the lock, and the softirq will then spin
891 forever trying to get the same lock.
893 Both of these are called deadlock, and as shown above, it can occur even
894 with a single CPU (although not on UP compiles, since spinlocks vanish
on kernel compiles with ``CONFIG_SMP``\ =n; you'll still get data
896 corruption in the second example).
898 This complete lockup is easy to diagnose: on SMP boxes the watchdog
899 timer or compiling with ``DEBUG_SPINLOCK`` set
(``include/linux/spinlock.h``) will show this up immediately when it
happens.
903 A more complex problem is the so-called 'deadly embrace', involving two
904 or more locks. Say you have a hash table: each entry in the table is a
905 spinlock, and a chain of hashed objects. Inside a softirq handler, you
906 sometimes want to alter an object from one place in the hash to another:
907 you grab the spinlock of the old hash chain and the spinlock of the new
hash chain, and delete the object from the old one, and insert it in
the new one.
911 There are two problems here. First, if your code ever tries to move the
912 object to the same chain, it will deadlock with itself as it tries to
913 lock it twice. Secondly, if the same softirq on another CPU is trying to
move another object in the reverse direction, the following could
happen:

+-----------------------+-----------------------+
| CPU 1                 | CPU 2                 |
+=======================+=======================+
| Grab lock A -> OK     | Grab lock B -> OK     |
+-----------------------+-----------------------+
| Grab lock B -> spin   | Grab lock A -> spin   |
+-----------------------+-----------------------+

Table: Consequences

927 The two CPUs will spin forever, waiting for the other to give up their
928 lock. It will look, smell, and feel like a crash.
Preventing Deadlock
-------------------

Textbooks will tell you that if you always lock in the same order, you
934 will never get this kind of deadlock. Practice will tell you that this
935 approach doesn't scale: when I create a new lock, I don't understand
enough of the kernel to figure out where in the 5000 lock hierarchy it
will fit.
939 The best locks are encapsulated: they never get exposed in headers, and
940 are never held around calls to non-trivial functions outside the same
941 file. You can read through this code and see that it will never
942 deadlock, because it never tries to grab another lock while it has that
one. People using your code don't even need to know you are using a
lock.
946 A classic problem here is when you provide callbacks or hooks: if you
947 call these with the lock held, you risk simple deadlock, or a deadly
948 embrace (who knows what the callback will do?). Remember, the other
949 programmers are out to get you, so don't do this.
951 Overzealous Prevention Of Deadlocks
952 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
954 Deadlocks are problematic, but not as bad as data corruption. Code which
955 grabs a read lock, searches a list, fails to find what it wants, drops
the read lock, grabs a write lock and inserts the object has a race
condition.
959 If you don't see why, please stay the fuck away from my code.
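
For the record, the racy pattern looks something like this (all names
invented); two tasks can both fail the search and both insert::

    read_lock(&list_lock);
    obj = search_list(id);
    read_unlock(&list_lock);

    if (!obj) {
            /* RACE: another task may have inserted id by now. */
            write_lock(&list_lock);
            insert_list(id);
            write_unlock(&list_lock);
    }

The fix is to repeat the search after taking the write lock, or simply
to take the write lock from the beginning.
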
961 Racing Timers: A Kernel Pastime
962 -------------------------------
964 Timers can produce their own special problems with races. Consider a
965 collection of objects (list, hash, etc) where each object has a timer
966 which is due to destroy it.
968 If you want to destroy the entire collection (say on module removal),
you might do the following::

    /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
       HUNGARIAN NOTATION */
    spin_lock_bh(&list_lock);

    while (list) {
            struct foo *next = list->next;
            del_timer(&list->timer);
            kfree(list);
            list = next;
    }

    spin_unlock_bh(&list_lock);

985 Sooner or later, this will crash on SMP, because a timer can have just
986 gone off before the :c:func:`spin_lock_bh()`, and it will only get
987 the lock after we :c:func:`spin_unlock_bh()`, and then try to free
988 the element (which has already been freed!).
990 This can be avoided by checking the result of
991 :c:func:`del_timer()`: if it returns 1, the timer has been deleted.
If 0, it means (in this case) that it is currently running, so we can
do the following::

    retry:
            spin_lock_bh(&list_lock);

            while (list) {
                    struct foo *next = list->next;
                    if (!del_timer(&list->timer)) {
                            /* Give timer a chance to delete this */
                            spin_unlock_bh(&list_lock);
                            goto retry;
                    }
                    kfree(list);
                    list = next;
            }

            spin_unlock_bh(&list_lock);

1012 Another common problem is deleting timers which restart themselves (by
1013 calling :c:func:`add_timer()` at the end of their timer function).
1014 Because this is a fairly common case which is prone to races, you should
1015 use :c:func:`del_timer_sync()` (``include/linux/timer.h``) to
1016 handle this case. It returns the number of times the timer had to be
1017 deleted before we finally stopped it from adding itself back in.
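
A hypothetical sketch of tearing down one such self-restarting timer,
assuming nothing else can re-arm it once ``shutting_down`` is set::

    struct foo {
            struct timer_list timer;
            int shutting_down;
    };

    /* Timer function, set up earlier with
       setup_timer(&f->timer, foo_timer_fn, (unsigned long)f);
       it re-arms itself unless we are shutting down. */
    static void foo_timer_fn(unsigned long data)
    {
            struct foo *f = (struct foo *)data;

            /* ... do the periodic work ... */
            if (!f->shutting_down)
                    mod_timer(&f->timer, jiffies + HZ);
    }

    void foo_destroy(struct foo *f)
    {
            f->shutting_down = 1;
            del_timer_sync(&f->timer);      /* cannot be running after this */
            kfree(f);
    }
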
Locking Speed
=============

There are three main things to worry about when considering the speed of
1023 some code which does locking. First is concurrency: how many things are
1024 going to be waiting while someone else is holding a lock. Second is the
1025 time taken to actually acquire and release an uncontended lock. Third is
1026 using fewer, or smarter locks. I'm assuming that the lock is used fairly
1027 often: otherwise, you wouldn't be concerned about efficiency.
1029 Concurrency depends on how long the lock is usually held: you should
1030 hold the lock for as long as needed, but no longer. In the cache
1031 example, we always create the object without the lock held, and then
1032 grab the lock only when we are ready to insert it in the list.
1034 Acquisition times depend on how much damage the lock operations do to
1035 the pipeline (pipeline stalls) and how likely it is that this CPU was
1036 the last one to grab the lock (ie. is the lock cache-hot for this CPU):
1037 on a machine with more CPUs, this likelihood drops fast. Consider a
1038 700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic
1039 increment takes about 58ns, a lock which is cache-hot on this CPU takes
1040 160ns, and a cacheline transfer from another CPU takes an additional 170
1041 to 360ns. (These figures from Paul McKenney's `Linux Journal RCU
1042 article <http://www.linuxjournal.com/article.php?sid=6993>`__).
1044 These two aims conflict: holding a lock for a short time might be done
1045 by splitting locks into parts (such as in our final per-object-lock
1046 example), but this increases the number of lock acquisitions, and the
1047 results are often slower than having a single lock. This is another
1048 reason to advocate locking simplicity.
1050 The third concern is addressed below: there are some methods to reduce
1051 the amount of locking which needs to be done.
1053 Read/Write Lock Variants
1054 ------------------------
1056 Both spinlocks and mutexes have read/write variants: ``rwlock_t`` and
1057 :c:type:`struct rw_semaphore <rw_semaphore>`. These divide
1058 users into two classes: the readers and the writers. If you are only
1059 reading the data, you can get a read lock, but to write to the data you
need the write lock. Many people can hold a read lock, but a writer must
be sole holder.
1063 If your code divides neatly along reader/writer lines (as our cache code
1064 does), and the lock is held by readers for significant lengths of time,
1065 using these locks can help. They are slightly slower than the normal
1066 locks though, so in practice ``rwlock_t`` is not usually worthwhile.
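
For reference, the mechanics look like this (an invented table of
values)::

    #define NR_SLOTS 16
    static DEFINE_RWLOCK(table_lock);       /* protects table[] */
    static int table[NR_SLOTS];

    /* Many readers may hold the read lock at once. */
    int table_get(unsigned int slot)
    {
            int val;

            read_lock(&table_lock);
            val = table[slot % NR_SLOTS];
            read_unlock(&table_lock);
            return val;
    }

    /* A writer must be the sole holder. */
    void table_set(unsigned int slot, int val)
    {
            write_lock(&table_lock);
            table[slot % NR_SLOTS] = val;
            write_unlock(&table_lock);
    }
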
1068 Avoiding Locks: Read Copy Update
1069 --------------------------------
1071 There is a special method of read/write locking called Read Copy Update.
1072 Using RCU, the readers can avoid taking a lock altogether: as we expect
1073 our cache to be read more often than updated (otherwise the cache is a
1074 waste of time), it is a candidate for this optimization.
1076 How do we get rid of read locks? Getting rid of read locks means that
1077 writers may be changing the list underneath the readers. That is
1078 actually quite simple: we can read a linked list while an element is
1079 being added if the writer adds the element very carefully. For example,
adding ``new`` to a single linked list called ``list``::

    new->next = list->next;
    wmb();
    list->next = new;

1087 The :c:func:`wmb()` is a write memory barrier. It ensures that the
1088 first operation (setting the new element's ``next`` pointer) is complete
1089 and will be seen by all CPUs, before the second operation is (putting
1090 the new element into the list). This is important, since modern
1091 compilers and modern CPUs can both reorder instructions unless told
1092 otherwise: we want a reader to either not see the new element at all, or
see the new element with the ``next`` pointer correctly pointing at the
rest of the list.
1096 Fortunately, there is a function to do this for standard
1097 :c:type:`struct list_head <list_head>` lists:
1098 :c:func:`list_add_rcu()` (``include/linux/list.h``).
1100 Removing an element from the list is even simpler: we replace the
1101 pointer to the old element with a pointer to its successor, and readers
will either see it, or skip over it::

    list->next = old->next;

1109 There is :c:func:`list_del_rcu()` (``include/linux/list.h``) which
does this (the normal version poisons the old object, which we don't
want).
1113 The reader must also be careful: some CPUs can look through the ``next``
1114 pointer to start reading the contents of the next element early, but
don't realize that the pre-fetched contents are wrong when the ``next``
1116 pointer changes underneath them. Once again, there is a
1117 :c:func:`list_for_each_entry_rcu()` (``include/linux/list.h``)
1118 to help you. Of course, writers can just use
1119 :c:func:`list_for_each_entry()`, since there cannot be two
1120 simultaneous writers.
1122 Our final dilemma is this: when can we actually destroy the removed
1123 element? Remember, a reader might be stepping through this element in
1124 the list right now: if we free this element and the ``next`` pointer
1125 changes, the reader will jump off into garbage and crash. We need to
1126 wait until we know that all the readers who were traversing the list
1127 when we deleted the element are finished. We use
1128 :c:func:`call_rcu()` to register a callback which will actually
1129 destroy the object once all pre-existing readers are finished.
1130 Alternatively, :c:func:`synchronize_rcu()` may be used to block
until all pre-existing readers are finished.
1133 But how does Read Copy Update know when the readers are finished? The
1134 method is this: firstly, the readers always traverse the list inside
1135 :c:func:`rcu_read_lock()`/:c:func:`rcu_read_unlock()` pairs:
these simply disable preemption so the reader won't go to sleep while
reading the list.
1139 RCU then waits until every other CPU has slept at least once: since
1140 readers cannot sleep, we know that any readers which were traversing the
1141 list during the deletion are finished, and the callback is triggered.
1142 The real Read Copy Update code is a little more optimized than this, but
1143 this is the fundamental idea.
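
To make the reader side concrete, here is a sketch of a lockless lookup
using the cache structures from the earlier examples
(``cache_present()`` is an invented helper); the actual changes to the
cache code follow in the patch below::

    int cache_present(int id)
    {
            struct object *i;
            int found = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(i, &cache, list) {
                    if (i->id == id) {
                            found = 1;
                            break;
                    }
            }
            rcu_read_unlock();
            return found;
    }
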
1147 --- cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
1148 +++ cache.c.rcupdate 2003-12-11 17:55:14.000000000 +1100
1150 #include <linux/list.h>
1151 #include <linux/slab.h>
1152 #include <linux/string.h>
1153 +#include <linux/rcupdate.h>
1154 #include <linux/mutex.h>
1155 #include <asm/errno.h>
1159 - /* These two protected by cache_lock. */
1160 + /* This is protected by RCU */
1161 struct list_head list;
1164 + struct rcu_head rcu;
1168 /* Doesn't change once created. */
1173 - list_for_each_entry(i, &cache, list) {
1174 + list_for_each_entry_rcu(i, &cache, list) {
1182 +/* Final discard done once we know no readers are looking. */
1183 +static void cache_delete_rcu(void *arg)
1188 /* Must be holding cache_lock */
1189 static void __cache_delete(struct object *obj)
1192 - list_del(&obj->list);
1194 + list_del_rcu(&obj->list);
1196 + call_rcu(&obj->rcu, cache_delete_rcu);
1199 /* Must be holding cache_lock */
1200 static void __cache_add(struct object *obj)
1202 - list_add(&obj->list, &cache);
1203 + list_add_rcu(&obj->list, &cache);
1204 if (++cache_num > MAX_CACHE_SIZE) {
1205 struct object *i, *outcast = NULL;
1206 list_for_each_entry(i, &cache, list) {
1207 @@ -104,12 +114,11 @@
1208 struct object *cache_find(int id)
1211 - unsigned long flags;
1213 - spin_lock_irqsave(&cache_lock, flags);
+        rcu_read_lock();
         obj = __cache_find(id);
1218 - spin_unlock_irqrestore(&cache_lock, flags);
1219 + rcu_read_unlock();
1223 Note that the reader will alter the popularity member in
1224 :c:func:`__cache_find()`, and now it doesn't hold a lock. One
1225 solution would be to make it an ``atomic_t``, but for this usage, we
don't really care about races: an approximate result is good enough, so
I didn't change it.
1229 The result is that :c:func:`cache_find()` requires no
synchronization with any other functions, so is almost as fast on SMP
as it would be on UP.
1233 There is a further optimization possible here: remember our original
1234 cache code, where there were no reference counts and the caller simply
1235 held the lock whenever using the object? This is still possible: if you
1236 hold the lock, no one can delete the object, so you don't need to get
1237 and put the reference count.
1239 Now, because the 'read lock' in RCU is simply disabling preemption, a
1240 caller which always has preemption disabled between calling
1241 :c:func:`cache_find()` and :c:func:`object_put()` does not
1242 need to actually get and put the reference count: we could expose
1243 :c:func:`__cache_find()` by making it non-static, and such
1244 callers could simply call that.
1246 The benefit here is that the reference count is not written to: the
object is not altered in any way, which is much faster on SMP machines
due to caching.
Per-CPU Data
------------

Another technique for avoiding locking which is used fairly widely is to
1254 duplicate information for each CPU. For example, if you wanted to keep a
1255 count of a common condition, you could use a spin lock and a single
1256 counter. Nice and simple.
1258 If that was too slow (it's usually not, but if you've got a really big
1259 machine to test on and can show that it is), you could instead use a
1260 counter for each CPU, then none of them need an exclusive lock. See
1261 :c:func:`DEFINE_PER_CPU()`, :c:func:`get_cpu_var()` and
1262 :c:func:`put_cpu_var()` (``include/linux/percpu.h``).
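
A sketch of such a per-CPU counter, with invented names::

    static DEFINE_PER_CPU(unsigned long, hit_count);

    /* Each CPU only ever touches its own counter: no lock needed. */
    void count_hit(void)
    {
            get_cpu_var(hit_count)++;       /* disables preemption... */
            put_cpu_var(hit_count);         /* ...and re-enables it */
    }

    /* An approximate total: no locking, so it may be slightly stale. */
    unsigned long total_hits(void)
    {
            unsigned long sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += per_cpu(hit_count, cpu);
            return sum;
    }
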
1264 Of particular use for simple per-cpu counters is the ``local_t`` type,
1265 and the :c:func:`cpu_local_inc()` and related functions, which are
1266 more efficient than simple code on some architectures
1267 (``include/asm/local.h``).
1269 Note that there is no simple, reliable way of getting an exact value of
such a counter, without introducing more locks. This is not a problem
for some uses.
Data Which Is Mostly Used By An IRQ Handler
-------------------------------------------
1276 If data is always accessed from within the same IRQ handler, you don't
1277 need a lock at all: the kernel already guarantees that the irq handler
1278 will not run simultaneously on multiple CPUs.
1280 Manfred Spraul points out that you can still do this, even if the data
1281 is very occasionally accessed in user context or softirqs/tasklets. The
irq handler doesn't use a lock, and all other accesses are done as so::

    spin_lock(&lock);
    disable_irq(irq);
    ...
    enable_irq(irq);
    spin_unlock(&lock);

1290 The :c:func:`disable_irq()` prevents the irq handler from running
1291 (and waits for it to finish if it's currently running on other CPUs).
1292 The spinlock prevents any other accesses happening at the same time.
1293 Naturally, this is slower than just a :c:func:`spin_lock_irq()`
call, so it only makes sense if this type of access happens extremely
rarely.
1297 What Functions Are Safe To Call From Interrupts?
1298 ================================================
1300 Many functions in the kernel sleep (ie. call schedule()) directly or
1301 indirectly: you can never call them while holding a spinlock, or with
1302 preemption disabled. This also means you need to be in user context:
1303 calling them from an interrupt is illegal.
1305 Some Functions Which Sleep
1306 --------------------------
1308 The most common ones are listed below, but you usually have to read the
1309 code to find out if other calls are safe. If everyone else who calls it
1310 can sleep, you probably need to be able to sleep, too. In particular,
1311 registration and deregistration functions usually expect to be called
1312 from user context, and can sleep.
1314 - Accesses to userspace:
1316 - :c:func:`copy_from_user()`
1318 - :c:func:`copy_to_user()`
1320 - :c:func:`get_user()`
1322 - :c:func:`put_user()`
1324 - :c:func:`kmalloc(GFP_KERNEL) <kmalloc>`
1326 - :c:func:`mutex_lock_interruptible()` and
1327 :c:func:`mutex_lock()`
1329 There is a :c:func:`mutex_trylock()` which does not sleep.
1330 Still, it must not be used inside interrupt context since its
1331 implementation is not safe for that. :c:func:`mutex_unlock()`
1332 will also never sleep. It cannot be used in interrupt context either
1333 since a mutex must be released by the same task that acquired it.
1335 Some Functions Which Don't Sleep
1336 --------------------------------
Some functions are safe to call from any context, or holding almost any
lock:
- :c:func:`printk()`

- :c:func:`kfree()`
1345 - :c:func:`add_timer()` and :c:func:`del_timer()`
Mutex API reference
===================

.. kernel-doc:: include/linux/mutex.h
   :internal:

.. kernel-doc:: kernel/locking/mutex.c
   :export:
Futex API reference
===================

.. kernel-doc:: kernel/futex.c
   :internal:
Further reading
===============

- ``Documentation/locking/spinlocks.txt``: Linus Torvalds' spinlocking
1366 tutorial in the kernel sources.
1368 - Unix Systems for Modern Architectures: Symmetric Multiprocessing and
1369 Caching for Kernel Programmers:
1371 Curt Schimmel's very good introduction to kernel level locking (not
1372 written for Linux, but nearly everything applies). The book is
1373 expensive, but really worth every penny to understand SMP locking.
Thanks
======

Thanks to Telsa Gwynne for DocBooking, neatening and adding style.
1381 Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras,
1382 Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev,
1383 James Morris, Robert Love, Paul McKenney, John Ashby for proofreading,
1384 correcting, flaming, commenting.
1386 Thanks to the cabal for having no influence on this document.
Glossary
========

preemption
  Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user
  context inside the kernel would not preempt each other (ie. you had that
  CPU until you gave it up, except for interrupts). With the addition of
  ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher
  priority tasks can "cut in": spinlocks were changed to disable
  preemption, even on UP.
bh
  Bottom Half: for historical reasons, functions with '_bh' in them often
  now refer to any software interrupt, e.g. :c:func:`spin_lock_bh()`
  blocks any software interrupt on the current CPU. Bottom halves are
  deprecated, and will eventually be replaced by tasklets. Only one bottom
  half will be running at any time.
1406 Hardware Interrupt / Hardware IRQ
  Hardware interrupt request. :c:func:`in_irq()` returns true in a
  hardware interrupt handler.
Interrupt Context
  Not user context: processing a hardware irq or software irq. Indicated
  by the :c:func:`in_interrupt()` macro returning true.
SMP
  Symmetric Multi-Processor: kernels compiled for multiple-CPU machines.
  (``CONFIG_SMP=y``).
1418 Software Interrupt / softirq
  Software interrupt handler. :c:func:`in_irq()` returns false;
  :c:func:`in_softirq()` returns true. Tasklets and softirqs both
  fall into the category of 'software interrupts'.

  Strictly speaking a softirq is one of up to 32 enumerated software
  interrupts which can run on multiple CPUs at once. Sometimes used to
  refer to tasklets as well (ie. all software interrupts).
tasklet
  A dynamically-registrable software interrupt, which is guaranteed to
  only run on one CPU at a time.
timer
  A dynamically-registrable software interrupt, which is run at (or close
  to) a given time. When running, it is just like a tasklet (in fact, they
  are called from the ``TIMER_SOFTIRQ``).
UP
  Uni-Processor: Non-SMP. (``CONFIG_SMP=n``).
User Context
  The kernel executing on behalf of a particular process (ie. a system
  call or trap) or kernel thread. You can tell which process with the
  ``current`` macro. Not to be confused with userspace. Can be
  interrupted by software or hardware interrupts.
Userspace
  A process executing its own code outside the kernel.