CFS Bandwidth Control
=====================

[ This document only discusses CPU bandwidth control for SCHED_NORMAL.
  The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.txt ]

CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
specification of the maximum CPU bandwidth available to a group or hierarchy.

The bandwidth allowed for a group is specified using a quota and period. Within
each given "period" (microseconds), a group is allowed to consume only up to
"quota" microseconds of CPU time. When the CPU bandwidth consumption of a
group exceeds this limit (for that period), the tasks belonging to its
hierarchy will be throttled and are not allowed to run again until the next
period.

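For example, a quota of 50ms against a 100ms period caps a group at half a
CPU. A minimal sketch, assuming the cpu controller is mounted at
/sys/fs/cgroup/cpu and a hypothetical group "app" has been created there:

	# cd /sys/fs/cgroup/cpu/app
	# echo 100000 > cpu.cfs_period_us /* period = 100ms */
	# echo 50000 > cpu.cfs_quota_us   /* quota = 50ms, i.e. 0.5 CPU */
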
A group's unused runtime is globally tracked, being refreshed with quota units
above at each period boundary. As threads consume this bandwidth it is
transferred to cpu-local "silos" on a demand basis. The amount transferred
within each of these updates is tunable and described as the "slice".

Management
----------
Quota and period are managed within the cpu subsystem via cgroupfs.

cpu.cfs_quota_us: the total available run-time within a period (in microseconds)
cpu.cfs_period_us: the length of a period (in microseconds)
cpu.stat: exports throttling statistics [explained further below]

The default values are:
	cpu.cfs_period_us=100ms
	cpu.cfs_quota_us=-1

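These defaults can be confirmed by reading the files back from a newly
created, unconstrained group:

	# cat cpu.cfs_period_us
	100000
	# cat cpu.cfs_quota_us
	-1
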
A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
bandwidth restriction in place; such a group is described as an unconstrained
bandwidth group. This represents the traditional work-conserving behavior for
CFS.

Writing any (valid) positive value(s) will enact the specified bandwidth limit.
The minimum value allowed for either quota or period is 1ms. There is also an
upper bound on the period length of 1s. Additional restrictions exist when
bandwidth limits are used in a hierarchical fashion; these are explained in
more detail below.

Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
and return the group to an unconstrained state once more.

Any updates to a group's bandwidth specification will result in it becoming
unthrottled if it is in a constrained state.

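For example, from within a hypothetical group's directory, a limit can be
enacted and later removed again:

	# echo 20000 > cpu.cfs_quota_us /* enact a 20ms limit */
	# echo -1 > cpu.cfs_quota_us    /* return to an unconstrained state */
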
System wide settings
--------------------
For efficiency run-time is transferred between the global pool and CPU local
"silos" in a batch fashion. This greatly reduces global accounting pressure
on large systems. The amount transferred each time such an update is required
is described as the "slice".

This is tunable via procfs:
	/proc/sys/kernel/sched_cfs_bandwidth_slice_us (default=5ms)

Larger slice values will reduce transfer overheads, while smaller values allow
for more fine-grained consumption.

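For example, to inspect the current slice and (purely as an illustration)
double it to 10ms:

	# cat /proc/sys/kernel/sched_cfs_bandwidth_slice_us
	5000
	# echo 10000 > /proc/sys/kernel/sched_cfs_bandwidth_slice_us
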
Statistics
----------
A group's bandwidth statistics are exported via 3 fields in cpu.stat.

cpu.stat:
- nr_periods: Number of enforcement intervals that have elapsed.
- nr_throttled: Number of times the group has been throttled/limited.
- throttled_time: The total time duration (in nanoseconds) for which entities
  of the group have been throttled.

This interface is read-only.

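An illustrative read from a group's directory (the values shown here are
invented for the example):

	# cat cpu.stat
	nr_periods 1000
	nr_throttled 12
	throttled_time 80000000
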
Hierarchical considerations
---------------------------
The interface enforces that an individual entity's bandwidth is always
attainable, that is: max(c_i) <= C. However, over-subscription in the
aggregate case is explicitly allowed to enable work-conserving semantics
within a hierarchy:

	e.g. \Sum (c_i) may exceed C

[ Where C is the parent's bandwidth, and c_i its children ]

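As a sketch with invented numbers: a parent granted 2 CPUs worth of bandwidth
(200ms per 100ms period) may have two children whose limits are individually
attainable (150ms <= 200ms) yet over-subscribed in aggregate (150ms + 150ms >
200ms). Assuming hypothetical cgroup directories "parent", "parent/child1"
and "parent/child2":

	# echo 100000 > parent/cpu.cfs_period_us       /* period = 100ms */
	# echo 200000 > parent/cpu.cfs_quota_us        /* quota = 200ms */
	# echo 150000 > parent/child1/cpu.cfs_quota_us /* 150ms <= 200ms */
	# echo 150000 > parent/child2/cpu.cfs_quota_us /* 150ms <= 200ms */

The children remain jointly capped by the parent's 200ms in any given period.
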
There are two ways in which a group may become throttled:
	a. it fully consumes its own quota within a period
	b. a parent's quota is fully consumed within its period

In case b) above, even though the child may have runtime remaining it will not
be allowed to run until the parent's runtime is refreshed.

CFS Bandwidth Quota Caveats
---------------------------
Once a slice is assigned to a cpu it does not expire. However all but 1ms of
the slice may be returned to the global pool if all threads on that cpu become
unrunnable. This is configured at compile time by the min_cfs_rq_runtime
variable. This is a performance tweak that helps prevent added contention on
the global lock.

The fact that cpu-local slices do not expire results in some interesting corner
cases that should be understood.

For applications that are cpu limited by their cgroup quota this is a
relatively moot point because they will naturally consume the entirety of
their quota as well as the entirety of each cpu-local slice in each period. As
a result it is expected that nr_periods will roughly equal nr_throttled, and
that cpuacct.usage will increase by roughly cfs_quota_us in each period.

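A rough sanity check of this expectation, assuming cpuacct is co-mounted with
the cpu controller and that we sample from the group's directory:

	/* sample usage twice, one default period (100ms) apart; for a fully
	   cpu-bound, throttled group the delta should be close to
	   cfs_quota_us (cpuacct.usage is reported in nanoseconds) */
	# cat cpuacct.usage; sleep 0.1; cat cpuacct.usage
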
For highly-threaded, non-cpu bound applications this non-expiration nuance
allows applications to briefly burst past their quota limits by the amount of
unused slice on each cpu that the task group is running on (typically at most
1ms per cpu or as defined by min_cfs_rq_runtime). This slight burst only
applies if quota had been assigned to a cpu and then not fully used or returned
in previous periods. This burst amount will not be transferred between cores.
As a result, this mechanism still strictly limits the task group to quota
average usage, albeit over a longer time window than a single period. This
also limits the burst ability to no more than 1ms per cpu. This provides a
better, more predictable user experience for highly threaded applications with
small quota limits on high core count machines. It also eliminates the
propensity to throttle these applications while simultaneously using less than
quota amounts of cpu. Another way to say this is that by allowing the unused
portion of a slice to remain valid across periods we have decreased the
possibility of wastefully expiring quota on cpu-local silos that don't need a
full slice's amount of cpu time.

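As a hedged, illustrative calculation (numbers invented for this example):

	quota = 20ms, period = 100ms, threads running on 8 cpus
	worst-case burst = 8 cpus * 1ms residual slice = 8ms
	worst-case usage in a single period = 20ms + 8ms = 28ms

Any such burst is repaid by reduced consumption in later periods, so average
usage over multiple periods still cannot exceed the quota.
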
The interaction between cpu-bound and non-cpu-bound interactive applications
should also be considered, especially when single core usage hits 100%. If you
gave each of these applications half of a cpu-core and they both got scheduled
on the same CPU it is theoretically possible that the non-cpu-bound application
will use up to 1ms additional quota in some periods, thereby preventing the
cpu-bound application from fully using its quota by that same amount. In these
instances it will be up to the CFS algorithm (see sched-design-CFS.rst) to
decide which application is chosen to run, as they will both be runnable and
have remaining quota. This runtime discrepancy will be made up in the following
periods when the interactive application idles.

Examples
--------
1. Limit a group to 1 CPU worth of runtime.

	If period is 250ms and quota is also 250ms, the group will get
	1 CPU worth of runtime every 250ms.

	# echo 250000 > cpu.cfs_quota_us /* quota = 250ms */
	# echo 250000 > cpu.cfs_period_us /* period = 250ms */

2. Limit a group to 2 CPUs worth of runtime on a multi-CPU machine.

	With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
	runtime every 500ms.

	# echo 1000000 > cpu.cfs_quota_us /* quota = 1000ms */
	# echo 500000 > cpu.cfs_period_us /* period = 500ms */

	The larger period here allows for increased burst capacity.

3. Limit a group to 20% of 1 CPU.

	With 50ms period, 10ms quota will be equivalent to 20% of 1 CPU.

	# echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
	# echo 50000 > cpu.cfs_period_us /* period = 50ms */

	By using a small period here we are ensuring a consistent latency
	response at the expense of burst capacity.