Deadline Task Scheduling
------------------------

CONTENTS
========

 0. WARNING
 1. Overview
 2. Scheduling algorithm
 3. Scheduling Real-Time Tasks
 4. Bandwidth management
   4.1 System-wide settings
   4.2 Task interface
   4.3 Default behavior
 5. Tasks CPU affinity
   5.1 SCHED_DEADLINE and cpusets HOWTO
 6. Future plans
 Appendix A. Test suite
 Appendix B. Minimal main()
0. WARNING
==========

Fiddling with these settings can result in unpredictable or even unstable
system behavior. As for -rt (group) scheduling, it is assumed that root
users know what they're doing.


1. Overview
===========

The SCHED_DEADLINE policy contained inside the sched_dl scheduling class is
basically an implementation of the Earliest Deadline First (EDF) scheduling
algorithm, augmented with a mechanism (called Constant Bandwidth Server, CBS)
that makes it possible to isolate the behavior of tasks from each other.
2. Scheduling algorithm
=======================
SCHED_DEADLINE uses three parameters, named "runtime", "period", and
"deadline", to schedule tasks. A SCHED_DEADLINE task should receive
"runtime" microseconds of execution time every "period" microseconds, and
these "runtime" microseconds are available within "deadline" microseconds
from the beginning of the period. In order to implement this behaviour,
every time the task wakes up, the scheduler computes a "scheduling deadline"
consistent with the guarantee (using the CBS[2,3] algorithm). Tasks are then
scheduled using EDF[1] on these scheduling deadlines (the task with the
earliest scheduling deadline is selected for execution). Notice that the
task actually receives "runtime" time units within "deadline" if a proper
"admission control" strategy (see Section "4. Bandwidth management") is used
(clearly, if the system is overloaded this guarantee cannot be respected).
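
To make the EDF rule above concrete, what follows is a minimal user-space
sketch of selecting the runnable task with the earliest scheduling deadline.
It is illustrative only: the structure and function names are invented for
this document and do not correspond to the kernel's internal data structures.

  #include <stddef.h>

  /* Hypothetical, simplified view of a -deadline task's state. */
  struct dl_task {
          unsigned long long scheduling_deadline; /* absolute, in ns */
          int runnable;
  };

  /*
   * EDF selection: among the runnable tasks, pick the one with the
   * earliest scheduling deadline.
   */
  struct dl_task *edf_pick_next(struct dl_task *tasks, size_t n)
  {
          struct dl_task *next = NULL;
          size_t i;

          for (i = 0; i < n; i++) {
                  if (!tasks[i].runnable)
                          continue;
                  if (!next || tasks[i].scheduling_deadline <
                               next->scheduling_deadline)
                          next = &tasks[i];
          }
          return next;
  }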
Summing up, the CBS[2,3] algorithm assigns scheduling deadlines to tasks so
that each task runs for at most its runtime every period, avoiding any
interference between different tasks (bandwidth isolation), while the EDF[1]
algorithm selects the task with the earliest scheduling deadline as the one
to be executed next. Thanks to this feature, tasks that do not strictly comply
with the "traditional" real-time task model (see Section 3) can effectively
use the new policy.

In more detail, the CBS algorithm assigns scheduling deadlines to tasks in
the following way (a C sketch of these rules follows the list):
 - Each SCHED_DEADLINE task is characterised by the "runtime",
   "deadline", and "period" parameters;

 - The state of the task is described by a "scheduling deadline", and
   a "remaining runtime". These two parameters are initially set to 0;

 - When a SCHED_DEADLINE task wakes up (becomes ready for execution),
   the scheduler checks if

                remaining runtime                  runtime
       ----------------------------------    >    ---------
       scheduling deadline - current time           period

   then, if the scheduling deadline is smaller than the current time, or
   this condition is verified, the scheduling deadline and the
   remaining runtime are re-initialised as

       scheduling deadline = current time + deadline
       remaining runtime = runtime

   otherwise, the scheduling deadline and the remaining runtime are
   left unchanged;
 - When a SCHED_DEADLINE task executes for an amount of time t, its
   remaining runtime is decreased as

       remaining runtime = remaining runtime - t

   (technically, the runtime is decreased at every tick, or when the
   task is descheduled / preempted);

 - When the remaining runtime becomes less than or equal to 0, the task is
   said to be "throttled" (also known as "depleted" in real-time literature)
   and cannot be scheduled until its scheduling deadline. The "replenishment
   time" for this task (see next item) is set to be equal to the current
   value of the scheduling deadline;

 - When the current time is equal to the replenishment time of a
   throttled task, the scheduling deadline and the remaining runtime are
   updated as

       scheduling deadline = scheduling deadline + period
       remaining runtime = remaining runtime + runtime
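
The rules above can be condensed into a few lines of C. The following
user-space sketch mirrors the list item by item; all names are invented for
illustration, and the actual in-kernel implementation differs in many
details.

  /* Times are in arbitrary integer units; all names are illustrative. */
  struct cbs_state {
          long long runtime, deadline, period;    /* task parameters */
          long long remaining_runtime;            /* current budget */
          long long scheduling_deadline;          /* absolute time */
  };

  /* Rule applied when the task wakes up. */
  void cbs_wakeup(struct cbs_state *t, long long now)
  {
          /*
           * Re-initialise deadline and budget if the old deadline has
           * passed, or if reusing them would violate the
           * remaining runtime / (deadline - now) > runtime / period
           * bound (written cross-multiplied to avoid divisions).
           */
          if (t->scheduling_deadline < now ||
              t->remaining_runtime * t->period >
              (t->scheduling_deadline - now) * t->runtime) {
                  t->scheduling_deadline = now + t->deadline;
                  t->remaining_runtime = t->runtime;
          }
          /* otherwise both are left unchanged */
  }

  /* Rule applied while the task runs; returns 1 if it gets throttled. */
  int cbs_account(struct cbs_state *t, long long ran_for)
  {
          t->remaining_runtime -= ran_for;
          return t->remaining_runtime <= 0; /* throttled until the deadline */
  }

  /* Rule applied at the replenishment time (the old scheduling deadline). */
  void cbs_replenish(struct cbs_state *t)
  {
          t->scheduling_deadline += t->period;
          t->remaining_runtime += t->runtime;
  }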
3. Scheduling Real-Time Tasks
=============================

 * BIG FAT WARNING ******************************************************
 *
 * This section contains a (not thorough) summary of classical deadline
 * scheduling theory, and of how it applies to SCHED_DEADLINE.
 * The reader can "safely" skip to Section 4 if only interested in seeing
 * how the scheduling policy can be used. Anyway, we strongly recommend
 * coming back here and continuing to read (once the urge for testing is
 * satisfied :P) to be sure of fully understanding all technical details.
 ************************************************************************

There are no limitations on what kind of task can exploit this new
scheduling discipline, even if it must be said that it is particularly
suited for periodic or sporadic real-time tasks that need guarantees on
their timing behavior, e.g., multimedia, streaming, and control applications.
A typical real-time task is composed of a repetition of computation phases
(task instances, or jobs) which are activated in a periodic or sporadic
fashion.
Each job J_j (where J_j is the j^th job of the task) is characterised by an
arrival time r_j (the time when the job becomes ready for execution), an
amount of computation time c_j needed to finish the job, and a job absolute
deadline d_j, which is the time within which the job should be finished.
The maximum execution time max_j{c_j} is called "Worst Case Execution Time"
(WCET) for the task.
A real-time task can be periodic with period P if r_{j+1} = r_j + P, or
sporadic with minimum inter-arrival time P if r_{j+1} >= r_j + P. Finally,
d_j = r_j + D, where D is the task's relative deadline.
The utilisation of a real-time task is defined as the ratio between its
WCET and its period (or minimum inter-arrival time), and represents
the fraction of CPU time needed to execute the task.
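
To make the notation concrete, the small sketch below (all names are
invented for this document) encodes the task model and computes a task's
utilisation:

  /* Hypothetical encoding of the task model; times in microseconds. */
  struct rt_task {
          double wcet;            /* WCET: max_j{c_j} */
          double period;          /* P: period or min inter-arrival time */
          double rel_deadline;    /* D: relative deadline, d_j = r_j + D */
  };

  /* Utilisation: the fraction of one CPU needed to execute the task. */
  double utilisation(const struct rt_task *t)
  {
          return t->wcet / t->period;
  }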
If the total utilisation sum_i(WCET_i/P_i) is larger than M (with M equal
to the number of CPUs), then the scheduler is unable to respect all the
deadlines.
Note that total utilisation is defined as the sum of the utilisations
WCET_i/P_i over all the real-time tasks in the system. When considering
multiple real-time tasks, the parameters of the i-th task are indicated
with the "_i" suffix.
Moreover, if the total utilisation is larger than M, then we risk starving
non-real-time tasks by real-time tasks.
If, instead, the total utilisation is smaller than M, then non-real-time
tasks will not be starved and the system might be able to respect all the
deadlines.
As a matter of fact, in this case it is possible to provide an upper bound
for tardiness (defined as the maximum between 0 and the difference
between the finishing time of a job and its absolute deadline).
More precisely, it can be proven that using a global EDF scheduler the
maximum tardiness of each task is smaller than or equal to

	((M − 1) · WCET_max − WCET_min) / (M − (M − 2) · U_max) + WCET_max

where WCET_max = max_i{WCET_i} is the maximum WCET, WCET_min = min_i{WCET_i}
is the minimum WCET, and U_max = max_i{WCET_i/P_i} is the maximum utilisation.
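
The bound can be evaluated directly; a minimal sketch (the function name is
made up for this document) and a worked example follow.

  /*
   * Upper bound on tardiness under global EDF, valid when the total
   * utilisation is smaller than M:
   * ((M - 1) * WCET_max - WCET_min) / (M - (M - 2) * U_max) + WCET_max
   */
  double gedf_tardiness_bound(int m, double wcet_max, double wcet_min,
                              double u_max)
  {
          return ((m - 1) * wcet_max - wcet_min) /
                 (m - (m - 2) * u_max) + wcet_max;
  }

For instance, with M = 4 CPUs, WCET_max = 30ms, WCET_min = 5ms and
U_max = 0.5, the bound evaluates to
(3 * 30 - 5) / (4 - 2 * 0.5) + 30 = 85/3 + 30 ≈ 58.3ms.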
If M=1 (uniprocessor system), or in case of partitioned scheduling (each
real-time task is statically assigned to one and only one CPU), it is
possible to formally check whether all the deadlines are respected.
If D_i = P_i for all tasks, then EDF is able to respect all the deadlines
of all the tasks executing on a CPU if and only if the total utilisation
of the tasks running on such a CPU is smaller than or equal to 1.
If D_i != P_i for some task, then it is possible to define the density of
a task as WCET_i/min{D_i,P_i}, and EDF is able to respect all the deadlines
of all the tasks running on a CPU if the sum of the densities of the tasks
running on such a CPU is smaller than or equal to 1
(notice that this condition is only sufficient, and not necessary).
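
A sketch of this density-based test follows; it is illustrative only and
reuses the hypothetical struct rt_task introduced earlier in this section.

  /*
   * Sufficient (not necessary) uniprocessor EDF test for D_i != P_i:
   * accept if sum_i WCET_i / min{D_i, P_i} <= 1.
   */
  int edf_density_test(const struct rt_task *tasks, int n)
  {
          double sum = 0.0;
          int i;

          for (i = 0; i < n; i++) {
                  double d = tasks[i].rel_deadline < tasks[i].period ?
                             tasks[i].rel_deadline : tasks[i].period;
                  sum += tasks[i].wcet / d;
          }
          return sum <= 1.0; /* 1 = all deadlines will be respected */
  }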
On multiprocessor systems with global EDF scheduling (non-partitioned
systems), a sufficient test for schedulability cannot be based on the
utilisations (it can be shown that task sets with utilisations slightly
larger than 1 can miss deadlines regardless of the number of CPUs M).
However, as previously stated, enforcing that the total utilisation is smaller
than M is enough to guarantee that non-real-time tasks are not starved and
that the tardiness of real-time tasks has an upper bound.
SCHED_DEADLINE can be used to schedule real-time tasks guaranteeing that
the jobs' deadlines of a task are respected. In order to do this, a task
must be scheduled by setting:

  - runtime >= WCET
  - deadline = D
  - period <= P

IOW, if runtime >= WCET and if period is <= P, then the scheduling deadlines
and the absolute deadlines (d_j) coincide, so a proper admission control
makes it possible to respect the jobs' absolute deadlines for this task (this
is what is called the "hard schedulability property" and is an extension of
Lemma 1 of [2]).
Notice that if runtime > deadline the admission control will surely reject
this task, as it is not possible to respect its temporal constraints.
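
In terms of the sched_attr fields consumed by sched_setattr() (see Section
4.2 and the complete program in Appendix B), the mapping above could be
coded as in the sketch below. It assumes the struct sched_attr and
SCHED_DEADLINE definitions from Appendix B; the helper name is invented.

  /*
   * Fill in a reservation guaranteeing the job deadlines of a task with
   * worst-case execution time wcet, relative deadline d and (minimum
   * inter-arrival) period p, all expressed in nanoseconds.
   */
  void map_task_to_reservation(struct sched_attr *attr,
                               __u64 wcet, __u64 d, __u64 p)
  {
          attr->sched_policy   = SCHED_DEADLINE;
          attr->sched_runtime  = wcet;    /* runtime >= WCET */
          attr->sched_deadline = d;       /* deadline = D */
          attr->sched_period   = p;       /* period <= P */
  }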
 1 - C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming
     in a hard-real-time environment. Journal of the Association for
     Computing Machinery, 20(1), 1973.
 2 - L. Abeni, G. Buttazzo. Integrating Multimedia Applications in Hard
     Real-Time Systems. Proceedings of the 19th IEEE Real-Time Systems
     Symposium, 1998. http://retis.sssup.it/~giorgio/paps/1998/rtss98-cbs.pdf
 3 - L. Abeni. Server Mechanisms for Multimedia Applications. ReTiS Lab
     Technical Report. http://disi.unitn.it/~abeni/tr-98-01.pdf
4. Bandwidth management
=======================

As previously mentioned, in order for -deadline scheduling to be
effective and useful (that is, to be able to provide "runtime" time units
within "deadline"), it is important to have some method to keep the allocation
of the available fractions of CPU time to the various tasks under control.
This is usually called "admission control" and if it is not performed, then
no guarantee can be given on the actual scheduling of the -deadline tasks.
As already stated in Section 3, a necessary condition to be respected in
order to correctly schedule a set of real-time tasks is that the total
utilisation is smaller than M. When talking about -deadline tasks, this
requires that the sum of the ratio between runtime and period for all tasks
is smaller than M. Notice that the ratio runtime/period is equivalent to the
utilisation of a "traditional" real-time task, and is also often referred to
as "bandwidth".
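
As a sketch, the necessary condition above can be checked as follows
(illustrative names only; the kernel performs an equivalent check internally
when sched_setattr() is called):

  /*
   * Necessary admission condition for -deadline tasks on M CPUs:
   * sum_i runtime_i / period_i must stay below M.
   */
  int dl_admission_ok(const long long *runtime, const long long *period,
                      int n, int m)
  {
          double total_bw = 0.0;
          int i;

          for (i = 0; i < n; i++)
                  total_bw += (double)runtime[i] / (double)period[i];
          return total_bw < m; /* 1 = the task set may be admitted */
  }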
The interface used to control the CPU bandwidth that can be allocated
to -deadline tasks is similar to the one already used for -rt
tasks with real-time group scheduling (a.k.a. RT-throttling - see
Documentation/scheduler/sched-rt-group.txt), and is based on readable/
writable control files located in procfs (for system-wide settings).
Notice that per-group settings (controlled through cgroupfs) are still not
defined for -deadline tasks, because more discussion is needed in order to
figure out how we want to manage SCHED_DEADLINE bandwidth at the task group
level.

A main difference between deadline bandwidth management and RT-throttling
is that -deadline tasks have bandwidth on their own (while -rt ones don't!),
and thus we don't need a higher level throttling mechanism to enforce the
desired bandwidth. In other words, this means that interface parameters are
only used at admission control time (i.e., when the user calls
sched_setattr()). Scheduling is then performed considering actual tasks'
parameters, so that CPU bandwidth is allocated to SCHED_DEADLINE tasks
respecting their needs in terms of granularity. Therefore, using this simple
interface we can put a cap on the total utilisation of -deadline tasks (i.e.,
\Sum (runtime_i / period_i) < global_dl_utilization_cap).
4.1 System-wide settings
------------------------

The system-wide settings are configured under the /proc virtual file system.

For now the -rt knobs are used for -deadline admission control and the
-deadline runtime is accounted against the -rt runtime. We realise that this
isn't entirely desirable; however, it is better to have a small interface for
now, and be able to change it easily later. The ideal situation (see Section
6) is to run -rt tasks from a -deadline server; in which case the -rt
bandwidth is a direct subset of dl_bw.

This means that, for a root_domain comprising M CPUs, -deadline tasks
can be created while the sum of their bandwidths stays below:

  M * (sched_rt_runtime_us / sched_rt_period_us)
It is also possible to disable this bandwidth management logic, and
thus be free to oversubscribe the system up to any arbitrary level.
This is done by writing -1 to /proc/sys/kernel/sched_rt_runtime_us.
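
The sketch below reads these knobs and computes the resulting cap on
-deadline bandwidth. The procfs paths are the real ones; everything else is
illustrative, and the number of online CPUs is used as an approximation of
the root_domain size.

  #include <stdio.h>
  #include <unistd.h>

  /* Read one integer knob from procfs; returns 0 on success. */
  static int read_knob(const char *path, long *val)
  {
          FILE *f = fopen(path, "r");
          int ok;

          if (!f)
                  return -1;
          ok = fscanf(f, "%ld", val) == 1;
          fclose(f);
          return ok ? 0 : -1;
  }

  int main(void)
  {
          long runtime, period;
          long m = sysconf(_SC_NPROCESSORS_ONLN); /* online CPUs */

          if (read_knob("/proc/sys/kernel/sched_rt_runtime_us", &runtime) ||
              read_knob("/proc/sys/kernel/sched_rt_period_us", &period))
                  return 1;

          if (runtime == -1) {
                  printf("bandwidth management is disabled\n");
                  return 0;
          }
          /* Cap on the sum of -deadline bandwidths, in units of CPUs. */
          printf("-deadline bandwidth cap: %.2f CPUs\n",
                 m * ((double)runtime / (double)period));
          return 0;
  }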
4.2 Task interface
------------------

Specifying a periodic/sporadic task that executes for a given amount of
runtime at each instance, and that is scheduled according to the urgency of
its own timing constraints needs, in general, a way of declaring:
 - a (maximum/typical) instance execution time,
 - a minimum interval between consecutive instances,
 - a time constraint by which each instance must be completed.

Therefore:
 * a new struct sched_attr, containing all the necessary fields, is
   provided;
 * the new scheduling related syscalls that manipulate it, i.e.,
   sched_setattr() and sched_getattr(), are implemented.


4.3 Default behavior
--------------------
The default value for the SCHED_DEADLINE bandwidth is to have rt_runtime
equal to 950000. With rt_period equal to 1000000, by default, this means
that -deadline tasks can use at most 95%, multiplied by the number of CPUs
that compose the root_domain, for each root_domain. For example, on a
root_domain made of 4 CPUs, -deadline tasks can consume at most
4 * 0.95 = 3.8 CPUs' worth of bandwidth.
This means that non -deadline tasks will receive at least 5% of the CPU time,
and that -deadline tasks will receive their runtime with a guaranteed
worst-case delay with respect to the "deadline" parameter. If "deadline" =
"period" and the cpuset mechanism is used to implement partitioned scheduling
(see Section 5), then this simple setting of the bandwidth management is able
to deterministically guarantee that -deadline tasks will receive their
runtime in a period.

Finally, notice that in order not to jeopardize the admission control a
-deadline task cannot fork.
5. Tasks CPU affinity
=====================

-deadline tasks cannot have an affinity mask smaller than the entire
root_domain they are created on. However, affinities can be specified
through the cpuset facility (Documentation/cgroups/cpusets.txt).

5.1 SCHED_DEADLINE and cpusets HOWTO
------------------------------------

An example of a simple configuration (pin a -deadline task to CPU0)
follows (rt-app is used to create a -deadline task).
  mkdir /dev/cpuset
  mount -t cgroup -o cpuset cpuset /dev/cpuset
  cd /dev/cpuset
  mkdir cpu0
  echo 0 > cpu0/cpuset.cpus
  echo 0 > cpu0/cpuset.mems
  echo 1 > cpuset.cpu_exclusive
  echo 0 > cpuset.sched_load_balance
  echo 1 > cpu0/cpuset.cpu_exclusive
  echo 1 > cpu0/cpuset.mem_exclusive
  echo $$ > cpu0/tasks
  rt-app -t 100000:10000:d:0 -D5 (it is now actually superfluous to
  specify task affinity)
6. Future plans
===============

Still missing:

 - refinements to deadline inheritance, especially regarding the possibility
   of retaining bandwidth isolation among non-interacting tasks. This is
   being studied from both theoretical and practical points of view, and
   hopefully we should be able to produce some demonstrative code soon;
 - (c)group based bandwidth management, and maybe scheduling;
 - access control for non-root users (and related security concerns to
   address): what is the best way to allow unprivileged use of the
   mechanisms, and how can non-root users be prevented from "cheating"
   the system?

As already discussed, we are also planning to merge this work with the EDF
throttling patches [https://lkml.org/lkml/2010/2/23/239], but we are still in
the preliminary phases of the merge and we really seek feedback that would
help us decide on the direction it should take.
Appendix A. Test suite
======================

The SCHED_DEADLINE policy can be easily tested using two applications that
are part of a wider Linux Scheduler validation suite. The suite is
available as a GitHub repository: https://github.com/scheduler-tools.

The first testing application is called rt-app and can be used to
start multiple threads with specific parameters. rt-app supports
SCHED_{OTHER,FIFO,RR,DEADLINE} scheduling policies and their related
parameters (e.g., niceness, priority, runtime/deadline/period). rt-app
is a valuable tool, as it can be used to synthetically recreate certain
workloads (maybe mimicking real use-cases) and evaluate how the scheduler
behaves under such workloads. In this way, results are easily reproducible.
rt-app is available at: https://github.com/scheduler-tools/rt-app.
Thread parameters can be specified from the command line, with something like
this:

 # rt-app -t 100000:10000:d -t 150000:20000:f:10 -D5

The above creates 2 threads. The first one, scheduled by SCHED_DEADLINE,
executes for 10ms every 100ms. The second one, scheduled at SCHED_FIFO
priority 10, executes for 20ms every 150ms. The test will run for a total
of 5 seconds.
More interestingly, configurations can be described with a json file that
can be passed as input to rt-app with something like this:

 # rt-app my_config.json

The parameters that can be specified with the second method are a superset
of the command line options. Please refer to rt-app documentation for more
details (<rt-app-sources>/doc/*.json).
The second testing application is a modification of schedtool, called
schedtool-dl, which can be used to set up SCHED_DEADLINE parameters for a
certain pid/application. schedtool-dl is available at:
https://github.com/scheduler-tools/schedtool-dl.git.

The usage is straightforward:

 # schedtool -E -t 10000000:100000000 -e ./my_cpuhog_app

With this, my_cpuhog_app is put to run inside a SCHED_DEADLINE reservation
of 10ms every 100ms (note that parameters are expressed in microseconds).
You can also use schedtool to create a reservation for an already running
application, given that you know its pid:

 # schedtool -E -t 10000000:100000000 my_app_pid
Appendix B. Minimal main()
==========================

We provide in what follows a simple (ugly) self-contained code snippet
showing how SCHED_DEADLINE reservations can be created by a real-time
application developer.
#define _GNU_SOURCE
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <linux/unistd.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <sys/syscall.h>
#include <pthread.h>

#define gettid() syscall(__NR_gettid)

#define SCHED_DEADLINE	6

/* XXX use the proper syscall numbers */
#ifdef __x86_64__
#define __NR_sched_setattr	314
#define __NR_sched_getattr	315
#endif
#ifdef __i386__
#define __NR_sched_setattr	351
#define __NR_sched_getattr	352
#endif
#ifdef __arm__
#define __NR_sched_setattr	380
#define __NR_sched_getattr	381
#endif

static volatile int done;

struct sched_attr {
        __u32 size;
        __u32 sched_policy;
        __u64 sched_flags;

        /* SCHED_NORMAL, SCHED_BATCH */
        __s32 sched_nice;

        /* SCHED_FIFO, SCHED_RR */
        __u32 sched_priority;

        /* SCHED_DEADLINE (nsec) */
        __u64 sched_runtime;
        __u64 sched_deadline;
        __u64 sched_period;
};

int sched_setattr(pid_t pid, const struct sched_attr *attr,
                  unsigned int flags)
{
        return syscall(__NR_sched_setattr, pid, attr, flags);
}

int sched_getattr(pid_t pid, struct sched_attr *attr,
                  unsigned int size, unsigned int flags)
{
        return syscall(__NR_sched_getattr, pid, attr, size, flags);
}

void *run_deadline(void *data)
{
        struct sched_attr attr;
        int ret;
        unsigned int flags = 0;

        printf("deadline thread started [%ld]\n", gettid());

        attr.size = sizeof(attr);
        attr.sched_flags = 0;
        attr.sched_nice = 0;
        attr.sched_priority = 0;

        /* This creates a 10ms/30ms reservation */
        attr.sched_policy = SCHED_DEADLINE;
        attr.sched_runtime = 10 * 1000 * 1000;
        attr.sched_period = attr.sched_deadline = 30 * 1000 * 1000;

        ret = sched_setattr(0, &attr, flags);
        if (ret < 0) {
                perror("sched_setattr");
                exit(-1);
        }

        while (!done)
                ;       /* burn CPU inside the reservation */

        printf("deadline thread dies [%ld]\n", gettid());
        return NULL;
}

int main(int argc, char **argv)
{
        pthread_t thread;

        printf("main thread [%ld]\n", gettid());
        pthread_create(&thread, NULL, run_deadline, NULL);
        sleep(10);      /* let the -deadline thread run for a while */
        done = 1;
        pthread_join(thread, NULL);
        printf("main dies [%ld]\n", gettid());
        return 0;
}