Kernel Lock Torture Test Operation

CONFIG_LOCK_TORTURE_TEST

The CONFIG_LOCK_TORTURE_TEST config option provides a kernel module
that runs torture tests on core kernel locking primitives. The kernel
module, 'locktorture', may be built after the fact on the running
kernel to be tested, if desired. The tests periodically output status
messages via printk(), which can be examined via dmesg (perhaps by
grepping for "torture"). The test is started when the module is loaded,
and stops when the module is unloaded. This program is based on how RCU
is tortured, via rcutorture.
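
For example, the following sequence (a minimal sketch, assuming the
module was built as a loadable module) starts the test, lets it run
briefly, inspects its output, and stops it:

        modprobe locktorture
        sleep 60
        dmesg | grep torture:
        rmmod locktorture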

This torture test consists of creating a number of kernel threads which
acquire the lock and hold it for a specific amount of time, thus simulating
different critical region behaviors. The amount of contention on the lock
can be varied by enlarging this critical region hold time and/or by
creating more kthreads.
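
For instance (a sketch only; the thread count is arbitrary, and the
parameter is described under MODULE PARAMETERS below), contention can be
increased by starting more writer kthreads than the default:

        modprobe locktorture nwriters_stress=16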

MODULE PARAMETERS

This module has the following parameters:


            ** Locktorture-specific **

nwriters_stress   Number of kernel threads that will stress exclusive lock
                  ownership (writers). The default value is twice the number
                  of online CPUs.

nreaders_stress   Number of kernel threads that will stress shared lock
                  ownership (readers). The default is the same as the number
                  of writer threads. If nwriters_stress is not specified,
                  both readers and writers default to the number of online
                  CPUs.

torture_type      Type of lock to torture. By default, only spinlocks will
                  be tortured. This module can torture the following locks,
                  with string values as follows:

                     o "lock_busted": Simulates a buggy lock implementation.

                     o "spin_lock": spin_lock() and spin_unlock() pairs.

                     o "spin_lock_irq": spin_lock_irq() and spin_unlock_irq()
                                        pairs.

                     o "rw_lock": read/write lock() and unlock() rwlock pairs.

                     o "rw_lock_irq": read/write lock_irq() and unlock_irq()
                                      rwlock pairs.

                     o "mutex_lock": mutex_lock() and mutex_unlock() pairs.

                     o "rwsem_lock": read/write down() and up() semaphore pairs.

torture_runnable  Start locktorture at boot time in the case where the
                  module is built into the kernel, otherwise wait for
                  torture_runnable to be set via sysfs before starting.
                  By default it will begin once the module is loaded.
                  See the boot-time example below.
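
When locktorture is built into the kernel rather than as a module, its
parameters can be passed on the kernel command line by prefixing them with
the module name. A minimal sketch (the lock type and thread counts are
arbitrary examples, not recommendations):

        locktorture.torture_type=rwsem_lock locktorture.torture_runnable=1
        locktorture.nwriters_stress=4 locktorture.nreaders_stress=4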

            ** Torture-framework (RCU + locking) **

shutdown_secs     The number of seconds to run the test before terminating
                  the test and powering off the system. The default is
                  zero, which disables test termination and system shutdown.
                  This capability is useful for automated testing.

onoff_interval    The number of seconds between each attempt to execute a
                  randomly selected CPU-hotplug operation. Defaults
                  to zero, which disables CPU hotplugging. In
                  CONFIG_HOTPLUG_CPU=n kernels, locktorture will silently
                  refuse to do any CPU-hotplug operations regardless of
                  what value is specified for onoff_interval.

onoff_holdoff     The number of seconds to wait until starting CPU-hotplug
                  operations. This would normally only be used when
                  locktorture was built into the kernel and started
                  automatically at boot time, in which case it is useful
                  in order to avoid confusing boot-time code with CPUs
                  coming and going. This parameter is only useful if
                  CONFIG_HOTPLUG_CPU is enabled.

stat_interval     Number of seconds between statistics-related printk()s.
                  By default, locktorture will report stats every 60 seconds.
                  Setting the interval to zero causes the statistics to
                  be printed -only- when the module is unloaded.

stutter           The length of time to run the test before pausing for this
                  same period of time. Defaults to "stutter=5", so as
                  to run and pause for (roughly) five-second intervals.
                  Specifying "stutter=0" causes the test to run continuously
                  without pausing, which is the old default behavior.

shuffle_interval  The number of seconds to keep the test threads bound
                  to a particular subset of the CPUs, defaults to 3 seconds.
                  Used in conjunction with test_no_idle_hz.

verbose           Enable verbose debugging printing, via printk(). Enabled
                  by default. This extra information is mostly related to
                  high-level errors and reports from the main 'torture'
                  framework.
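
As a combined illustration (a sketch only; the values shown are arbitrary,
not recommendations), the framework parameters can be given on the modprobe
command line, here running a one-hour automated test with CPU hotplugging
and 30-second statistics reports:

        modprobe locktorture torture_type=spin_lock shutdown_secs=3600 \
                 onoff_interval=5 onoff_holdoff=30 stat_interval=30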

STATISTICS

Statistics are printed in the following format:

spin_lock-torture: Writes: Total: 93746064 Max/Min: 0/0 Fail: 0
       (A)          (B)          (C)           (D)        (E)

(A): Lock type that is being tortured -- torture_type parameter.

(B): Number of writer lock acquisitions. If dealing with a read/write
     primitive, a second "Reads" statistics line is printed.

(C): Number of times the lock was acquired.

(D): Min and max number of times threads failed to acquire the lock.

(E): Number of errors encountered while acquiring the lock. This should
     -only- be positive if there is a bug in the locking primitive's
     implementation. Otherwise a lock should never fail (e.g., spin_lock()).
     Of course, the same applies to (C), above. A dummy example of this is
     the "lock_busted" type.

USAGE

The following script may be used to torture locks:

        #!/bin/sh

        modprobe locktorture
        sleep 3600
        rmmod locktorture
        dmesg | grep torture:

The output can be manually inspected for the error flag of "!!!".
One could of course create a more elaborate script that automatically
checked for such errors, as sketched below. The "rmmod" command forces a
"SUCCESS", "FAILURE", or "RCU_HOTPLUG" indication to be printk()ed. The
first two are self-explanatory, while the last indicates that while there
were no locking failures, CPU-hotplug problems were detected.
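
For instance, a slightly more elaborate script along the following lines
(a sketch only; the run length and the chosen torture_type are arbitrary)
could perform that check automatically:

        #!/bin/sh
        # Run locktorture for ten minutes, then scan the log for the
        # "!!!" error flag described above.
        modprobe locktorture torture_type=spin_lock
        sleep 600
        rmmod locktorture
        if dmesg | grep torture: | grep -q '!!!'
        then
                echo "locktorture: errors detected in the log"
                exit 1
        fi
        echo "locktorture: no errors detected"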

Also see: Documentation/RCU/torture.txt