				CGROUPS
				-------

Written by Paul Menage <menage@google.com> based on
Documentation/cgroups/cpusets.txt

Original copyright statements from cpusets.txt:
Portions Copyright (C) 2004 BULL SA.
Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
Modified by Paul Jackson <pj@sgi.com>
Modified by Christoph Lameter <clameter@sgi.com>

CONTENTS:
=========

1. Control Groups
  1.1 What are cgroups ?
  1.2 Why are cgroups needed ?
  1.3 How are cgroups implemented ?
  1.4 What does notify_on_release do ?
  1.5 What does clone_children do ?
  1.6 How do I use cgroups ?
2. Usage Examples and Syntax
  2.1 Basic Usage
  2.2 Attaching processes
  2.3 Mounting hierarchies by name
  2.4 Notification API
3. Kernel API
  3.1 Overview
  3.2 Synchronization
  3.3 Subsystem API
4. Extended attribute usage
5. Questions

1. Control Groups
=================

1.1 What are cgroups ?
----------------------

Control Groups provide a mechanism for aggregating/partitioning sets of
tasks, and all their future children, into hierarchical groups with
specialized behaviour.

Definitions:

A *cgroup* associates a set of tasks with a set of parameters for one
or more subsystems.

A *subsystem* is a module that makes use of the task grouping
facilities provided by cgroups to treat groups of tasks in
particular ways. A subsystem is typically a "resource controller" that
schedules a resource or applies per-cgroup limits, but it may be
anything that wants to act on a group of processes, e.g. a
virtualization subsystem.

A *hierarchy* is a set of cgroups arranged in a tree, such that
every task in the system is in exactly one of the cgroups in the
hierarchy, and a set of subsystems; each subsystem has system-specific
state attached to each cgroup in the hierarchy. Each hierarchy has
an instance of the cgroup virtual filesystem associated with it.

At any one time there may be multiple active hierarchies of task
cgroups. Each hierarchy is a partition of all tasks in the system.

User-level code may create and destroy cgroups by name in an
instance of the cgroup virtual file system, specify and query to
which cgroup a task is assigned, and list the task PIDs assigned to
a cgroup. Those creations and assignments only affect the hierarchy
associated with that instance of the cgroup file system.

On their own, the only use for cgroups is simple job tracking. The
intention is that other subsystems hook into the generic cgroup
support to provide new attributes for cgroups, such as
accounting/limiting the resources which processes in a cgroup can
access. For example, cpusets (see Documentation/cgroups/cpusets.txt) allow
you to associate a set of CPUs and a set of memory nodes with the
tasks in each cgroup.

1.2 Why are cgroups needed ?
----------------------------

There are multiple efforts to provide process aggregations in the
Linux kernel, mainly for resource-tracking purposes. Such efforts
include cpusets, CKRM/ResGroups, UserBeanCounters, and virtual server
namespaces. These all require the basic notion of a
grouping/partitioning of processes, with newly forked processes ending
up in the same group (cgroup) as their parent process.

The kernel cgroup patch provides the minimum essential kernel
mechanisms required to efficiently implement such groups. It has
minimal impact on the system fast paths, and provides hooks for
specific subsystems such as cpusets to provide additional behaviour as
desired.

Multiple hierarchy support is provided to allow for situations where
the division of tasks into cgroups is distinctly different for
different subsystems - having parallel hierarchies allows each
hierarchy to be a natural division of tasks, without having to handle
complex combinations of tasks that would be present if several
unrelated subsystems needed to be forced into the same tree of
cgroups.

At one extreme, each resource controller or subsystem could be in a
separate hierarchy; at the other extreme, all subsystems
would be attached to the same hierarchy.

As an example of a scenario (originally proposed by vatsa@in.ibm.com)
that can benefit from multiple hierarchies, consider a large
university server with various users - students, professors, system
tasks etc. The resource planning for this server could be along the
following lines:

       CPU :          "Top cpuset"
                       /       \
               CPUSet1         CPUSet2
                  |               |
               (Professors)    (Students)

               In addition (system tasks) are attached to topcpuset (so
               that they can run anywhere) with a limit of 20%

       Memory : Professors (50%), Students (30%), system (20%)

       Disk : Professors (50%), Students (30%), system (20%)

       Network : WWW browsing (20%), Network File System (60%), others (20%)

                       / \
               Professors (15%)  Students (5%)

Browsers like Firefox/Lynx go into the WWW network class, while (k)nfsd goes
into the NFS network class.

At the same time Firefox/Lynx will share an appropriate CPU/Memory class
depending on who launched it (prof/student).

With the ability to classify tasks differently for different resources
(by putting those resource subsystems in different hierarchies),
the admin can easily set up a script which receives exec notifications
and, depending on who is launching the browser, do:

       # echo browser_pid > /sys/fs/cgroup/<restype>/<userclass>/tasks

With only a single hierarchy, the admin would potentially have to create
a separate cgroup for every browser launched and associate it with the
appropriate network and other resource classes. This may lead to a
proliferation of such cgroups.

Also let's say that the administrator would like to give enhanced network
access temporarily to a student's browser (since it is night and the user
wants to do online gaming :)) OR give one of the student's simulation
apps enhanced CPU power.

With the ability to write PIDs directly to resource classes, it's just a
matter of:

       # echo pid > /sys/fs/cgroup/network/<new_class>/tasks
       (after some time)
       # echo pid > /sys/fs/cgroup/network/<orig_class>/tasks

Without this ability, the administrator would have to split the cgroup into
multiple separate ones and then associate the new cgroups with the
new resource classes.

1.3 How are cgroups implemented ?
---------------------------------

Control Groups extend the kernel as follows:

 - Each task in the system has a reference-counted pointer to a
   css_set.

 - A css_set contains a set of reference-counted pointers to
   cgroup_subsys_state objects, one for each cgroup subsystem
   registered in the system. There is no direct link from a task to
   the cgroup of which it's a member in each hierarchy, but this
   can be determined by following pointers through the
   cgroup_subsys_state objects. This is because accessing the
   subsystem state is something that's expected to happen frequently
   and in performance-critical code, whereas operations that require a
   task's actual cgroup assignments (in particular, moving between
   cgroups) are less common. A linked list runs through the cg_list
   field of each task_struct using the css_set, anchored at
   css_set->tasks.

 - A cgroup hierarchy filesystem can be mounted for browsing and
   manipulation from user space.

 - You can list all the tasks (by PID) attached to any cgroup.

The implementation of cgroups requires a few, simple hooks
into the rest of the kernel, none in performance-critical paths:

 - in init/main.c, to initialize the root cgroups and initial
   css_set at system boot.

 - in fork and exit, to attach and detach a task from its css_set.

In addition, a new file system of type "cgroup" may be mounted, to
enable browsing and modifying the cgroups presently known to the
kernel. When mounting a cgroup hierarchy, you may specify a
comma-separated list of subsystems to mount as the filesystem mount
options. By default, mounting the cgroup filesystem attempts to
mount a hierarchy containing all registered subsystems.

If an active hierarchy with exactly the same set of subsystems already
exists, it will be reused for the new mount. If no existing hierarchy
matches, and any of the requested subsystems are in use in an existing
hierarchy, the mount will fail with -EBUSY. Otherwise, a new hierarchy
is activated, associated with the requested subsystems.
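
For example (the mount points below are purely illustrative and are
assumed to already exist), mounting the same set of subsystems a second
time attaches to the existing hierarchy rather than creating a new one:

  # mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset
  # mkdir /mnt/cpuset2
  # mount -t cgroup -o cpuset cpuset /mnt/cpuset2

Both mount points now present the same set of cgroups; a cgroup created
under one is visible under the other.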

It's not currently possible to bind a new subsystem to an active
cgroup hierarchy, or to unbind a subsystem from an active cgroup
hierarchy. This may be possible in future, but is fraught with nasty
error-recovery issues.

When a cgroup filesystem is unmounted, if there are any
child cgroups created below the top-level cgroup, that hierarchy
will remain active even though unmounted; if there are no
child cgroups then the hierarchy will be deactivated.

No new system calls are added for cgroups - all support for
querying and modifying cgroups is via this cgroup file system.

Each task under /proc has an added file named 'cgroup' displaying,
for each active hierarchy, the subsystem names and the cgroup name
as the path relative to the root of the cgroup file system.
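
For instance, for a task that has been moved into a cgroup named
"Charlie" in a cpuset hierarchy, the output might look roughly like the
following (the hierarchy IDs, subsystem lists and paths depend entirely
on which hierarchies are mounted on the local system):

  $ cat /proc/self/cgroup
  2:cpuset:/Charlie
  1:cpu,memory:/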

Each cgroup is represented by a directory in the cgroup file system
containing the following files describing that cgroup:

 - tasks: list of tasks (by PID) attached to that cgroup. This list
   is not guaranteed to be sorted. Writing a thread ID into this file
   moves the thread into this cgroup.
 - cgroup.procs: list of thread group IDs in the cgroup. This list is
   not guaranteed to be sorted or free of duplicate TGIDs, and userspace
   should sort/uniquify the list if this property is required.
   Writing a thread group ID into this file moves all threads in that
   group into this cgroup.
 - notify_on_release flag: run the release agent on exit?
 - release_agent: the path to use for release notifications (this file
   exists in the top cgroup only)

Other subsystems such as cpusets may add additional files in each
cgroup directory.

New cgroups are created using the mkdir system call or shell
command. The properties of a cgroup, such as its flags, are
modified by writing to the appropriate file in that cgroup's
directory, as listed above.

The named hierarchical structure of nested cgroups allows partitioning
a large system into nested, dynamically changeable, "soft-partitions".

The attachment of each task, automatically inherited at fork by any
children of that task, to a cgroup allows organizing the work load
on a system into related sets of tasks. A task may be re-attached to
any other cgroup, if allowed by the permissions on the necessary
cgroup file system directories.

When a task is moved from one cgroup to another, it gets a new
css_set pointer - if there's an already existing css_set with the
desired collection of cgroups then that group is reused, otherwise a new
css_set is allocated. The appropriate existing css_set is located by
looking into a hash table.

To allow access from a cgroup to the css_sets (and hence tasks)
that comprise it, a set of cg_cgroup_link objects form a lattice;
each cg_cgroup_link is linked into a list of cg_cgroup_links for
a single cgroup on its cgrp_link_list field, and a list of
cg_cgroup_links for a single css_set on its cg_link_list.

Thus the set of tasks in a cgroup can be listed by iterating over
each css_set that references the cgroup, and sub-iterating over
each css_set's task set.

The use of a Linux virtual file system (vfs) to represent the
cgroup hierarchy provides for a familiar permission and name space
for cgroups, with a minimum of additional kernel code.

1.4 What does notify_on_release do ?
------------------------------------

If the notify_on_release flag is enabled (1) in a cgroup, then
whenever the last task in the cgroup leaves (exits or attaches to
some other cgroup) and the last child cgroup of that cgroup
is removed, then the kernel runs the command specified by the contents
of the "release_agent" file in that hierarchy's root directory,
supplying the pathname (relative to the mount point of the cgroup
file system) of the abandoned cgroup. This enables automatic
removal of abandoned cgroups. The default value of
notify_on_release in the root cgroup at system boot is disabled
(0). The default value of other cgroups at creation is the current
value of their parent's notify_on_release setting. The default value of
a cgroup hierarchy's release_agent path is empty.
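
As an illustration (assuming a cpuset hierarchy is already mounted at
/sys/fs/cgroup/cpuset and that /sbin/my_release_agent is a script you
provide), automatic cleanup of an abandoned cgroup could be set up
like this:

  # /bin/echo /sbin/my_release_agent > /sys/fs/cgroup/cpuset/release_agent
  # mkdir /sys/fs/cgroup/cpuset/Charlie
  # /bin/echo 1 > /sys/fs/cgroup/cpuset/Charlie/notify_on_release

Once the last task leaves Charlie and its last child cgroup is removed,
the kernel will invoke /sbin/my_release_agent with the path of the
abandoned cgroup (e.g. "/Charlie") as its argument.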

1.5 What does clone_children do ?
---------------------------------

This flag only affects the cpuset controller. If the clone_children
flag is enabled (1) in a cgroup, a new cpuset cgroup will copy its
configuration from the parent during initialization.
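
For example (again assuming a cpuset hierarchy mounted at
/sys/fs/cgroup/cpuset), a child created after the flag has been enabled
starts out with its parent's cpuset configuration instead of empty
cpuset.cpus/cpuset.mems files:

  # /bin/echo 1 > /sys/fs/cgroup/cpuset/cgroup.clone_children
  # mkdir /sys/fs/cgroup/cpuset/child
  # cat /sys/fs/cgroup/cpuset/child/cpuset.cpus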

1.6 How do I use cgroups ?
--------------------------

To start a new job that is to be contained within a cgroup, using
the "cpuset" cgroup subsystem, the steps are something like:

 1) mount -t tmpfs cgroup_root /sys/fs/cgroup
 2) mkdir /sys/fs/cgroup/cpuset
 3) mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
 4) Create the new cgroup by doing mkdir's and write's (or echo's) in
    the /sys/fs/cgroup/cpuset virtual file system.
 5) Start a task that will be the "founding father" of the new job.
 6) Attach that task to the new cgroup by writing its PID to the
    /sys/fs/cgroup/cpuset/tasks file for that cgroup.
 7) fork, exec or clone the job tasks from this founding father task.

For example, the following sequence of commands will set up a cgroup
named "Charlie", containing just CPUs 2 and 3, and Memory Node 1,
and then start a subshell 'sh' in that cgroup:

  mount -t tmpfs cgroup_root /sys/fs/cgroup
  mkdir /sys/fs/cgroup/cpuset
  mount -t cgroup cpuset -ocpuset /sys/fs/cgroup/cpuset
  cd /sys/fs/cgroup/cpuset
  mkdir Charlie
  cd Charlie
  /bin/echo 2-3 > cpuset.cpus
  /bin/echo 1 > cpuset.mems
  /bin/echo $$ > tasks
  sh
  # The subshell 'sh' is now running in cgroup Charlie
  # The next line should display '/Charlie'
  cat /proc/self/cgroup

2. Usage Examples and Syntax
============================

2.1 Basic Usage
---------------

Creating, modifying, and using cgroups can be done through the cgroup
virtual filesystem.

To mount a cgroup hierarchy with all available subsystems, type:
# mount -t cgroup xxx /sys/fs/cgroup

The "xxx" is not interpreted by the cgroup code, but will appear in
/proc/mounts, so it may be any useful identifying string that you like.

Note: Some subsystems do not work without some user input first. For instance,
if cpusets are enabled the user will have to populate the cpus and mems files
for each new cgroup created before that group can be used.
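
For example, with the all-subsystem mount shown above (and assuming
CPUs 0-1 and memory node 0 exist on the machine), a newly created
cgroup could be made usable like this:

  # mkdir /sys/fs/cgroup/new_group
  # /bin/echo 0-1 > /sys/fs/cgroup/new_group/cpuset.cpus
  # /bin/echo 0 > /sys/fs/cgroup/new_group/cpuset.mems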

As explained in section `1.2 Why are cgroups needed?' you should create
different hierarchies of cgroups for each single resource or group of
resources you want to control. Therefore, you should mount a tmpfs on
/sys/fs/cgroup and create directories for each cgroup resource or resource
group.

# mount -t tmpfs cgroup_root /sys/fs/cgroup
# mkdir /sys/fs/cgroup/rg1

To mount a cgroup hierarchy with just the cpuset and memory
subsystems, type:
# mount -t cgroup -o cpuset,memory hier1 /sys/fs/cgroup/rg1

While remounting cgroups is currently supported, it is not recommended
to use it. Remounting allows changing bound subsystems and
release_agent. Rebinding is hardly useful as it only works when the
hierarchy is empty, and release_agent itself should be replaced with
conventional fsnotify. The support for remounting will be removed in
the future.

To specify a hierarchy's release_agent:
# mount -t cgroup -o cpuset,release_agent="/sbin/cpuset_release_agent" \
  xxx /sys/fs/cgroup/rg1

Note that specifying 'release_agent' more than once will return failure.

Note that changing the set of subsystems is currently only supported
when the hierarchy consists of a single (root) cgroup. Supporting
the ability to arbitrarily bind/unbind subsystems from an existing
cgroup hierarchy is intended to be implemented in the future.

Then under /sys/fs/cgroup/rg1 you can find a tree that corresponds to the
tree of the cgroups in the system. For instance, /sys/fs/cgroup/rg1
is the cgroup that holds the whole system.

If you want to change the value of release_agent:
# echo "/sbin/new_release_agent" > /sys/fs/cgroup/rg1/release_agent

It can also be changed via remount.
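
A remount that changes the release_agent might look along these lines
(a sketch only; as noted above remount support is deprecated, and the
subsystem list is assumed to match the existing hierarchy):

  # mount -o remount,cpuset,release_agent="/sbin/new_release_agent" \
    xxx /sys/fs/cgroup/rg1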

If you want to create a new cgroup under /sys/fs/cgroup/rg1:
# cd /sys/fs/cgroup/rg1
# mkdir my_cgroup

Now you want to do something with this cgroup:
# cd my_cgroup

In this directory you can find several files:
# ls
cgroup.procs notify_on_release tasks
(plus whatever files added by the attached subsystems)

Now attach your shell to this cgroup:
# /bin/echo $$ > tasks

You can also create cgroups inside your cgroup by using mkdir in this
directory:
# mkdir my_sub_cs

To remove a cgroup, just use rmdir:
# rmdir my_sub_cs

This will fail if the cgroup is in use (has cgroups inside, or
has processes attached, or is held alive by other subsystem-specific
reference).

2.2 Attaching processes
-----------------------

# /bin/echo PID > tasks

Note that it is PID, not PIDs. You can only attach ONE task at a time.
If you have several tasks to attach, you have to do it one after another:

# /bin/echo PID1 > tasks
# /bin/echo PID2 > tasks
	...
# /bin/echo PIDn > tasks

You can attach the current shell task by echoing 0:

# echo 0 > tasks

You can use the cgroup.procs file instead of the tasks file to move all
threads in a threadgroup at once. Echoing the PID of any task in a
threadgroup to cgroup.procs causes all tasks in that threadgroup to be
attached to the cgroup. Writing 0 to cgroup.procs moves all tasks
in the writing task's threadgroup.
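
For example, to move the writing task's entire thread group into the
cgroup of the current directory:

# /bin/echo 0 > cgroup.procs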

Note: Since every task is always a member of exactly one cgroup in each
mounted hierarchy, to remove a task from its current cgroup you must
move it into a new cgroup (possibly the root cgroup) by writing to the
new cgroup's tasks file.

Note: Due to some restrictions enforced by some cgroup subsystems, moving
a process to another cgroup can fail.
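
For instance, the cpuset subsystem rejects attaching a task to a cgroup
whose cpuset.cpus and cpuset.mems files have not yet been populated; the
write then fails with an error along these lines (the exact message may
vary):

  # /bin/echo $$ > /sys/fs/cgroup/cpuset/empty_group/tasks
  /bin/echo: write error: No space left on device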

2.3 Mounting hierarchies by name
--------------------------------

Passing the name=<x> option when mounting a cgroups hierarchy
associates the given name with the hierarchy. This can be used when
mounting a pre-existing hierarchy, in order to refer to it by name
rather than by its set of active subsystems. Each hierarchy is either
nameless, or has a unique name.

The name should match [\w.-]+

When passing a name=<x> option for a new hierarchy, you need to
specify subsystems manually; the legacy behaviour of mounting all
subsystems when none are explicitly specified is not supported when
you give the hierarchy a name.

The hierarchy's name appears as part of the hierarchy description
in /proc/mounts and in each task's /proc/<pid>/cgroup file.
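
For example (the hierarchy name and mount point below are purely
illustrative), a named hierarchy could be created and later identified
as follows:

  # mount -t cgroup -o cpuset,name=myhier xxx /sys/fs/cgroup/rg1
  # grep name=myhier /proc/mounts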

2.4 Notification API
--------------------

There is a mechanism which allows getting notifications about the
changing status of a cgroup.

To register a new notification handler you need to:
 - create a file descriptor for event notification using eventfd(2);
 - open a control file to be monitored (e.g. memory.usage_in_bytes);
 - write "<event_fd> <control_fd> <args>" to cgroup.event_control.
   Interpretation of args is defined by control file implementation;

eventfd will be woken up by control file implementation or when the
cgroup is removed.

To unregister a notification handler just close eventfd.

NOTE: Support of notifications should be implemented for the control
file. See documentation for the subsystem.

3. Kernel API
=============

3.1 Overview
------------

Each kernel subsystem that wants to hook into the generic cgroup
system needs to create a cgroup_subsys object. This contains
various methods, which are callbacks from the cgroup system, along
with a subsystem ID which will be assigned by the cgroup system.

Other fields in the cgroup_subsys object include:

 - subsys_id: a unique array index for the subsystem, indicating which
   entry in cgroup->subsys[] this subsystem should be managing.

 - name: should be initialized to a unique subsystem name. Should be
   no longer than MAX_CGROUP_TYPE_NAMELEN.

 - early_init: indicate if the subsystem needs early initialization
   at system boot.

Each cgroup object created by the system has an array of pointers,
indexed by subsystem ID; this pointer is entirely managed by the
subsystem; the generic cgroup code will never touch this pointer.

3.2 Synchronization
-------------------

There is a global mutex, cgroup_mutex, used by the cgroup
system. This should be taken by anything that wants to modify a
cgroup. It may also be taken to prevent cgroups from being
modified, but more specific locks may be more appropriate in that
situation.

See kernel/cgroup.c for more details.

Subsystems can take/release the cgroup_mutex via the functions
cgroup_lock()/cgroup_unlock().

Accessing a task's cgroup pointer may be done in the following ways:
- while holding cgroup_mutex
- while holding the task's alloc_lock (via task_lock())
- inside an rcu_read_lock() section via rcu_dereference()

3.3 Subsystem API
-----------------

Each subsystem should:

 - add an entry in linux/cgroup_subsys.h
 - define a cgroup_subsys object called <name>_subsys

If a subsystem can be compiled as a module, it should also have in its
module initcall a call to cgroup_load_subsys(), and in its exitcall a
call to cgroup_unload_subsys(). It should also set its
<name>_subsys.module field to THIS_MODULE in its .c file.

Each subsystem may export the following methods. The only mandatory
methods are css_alloc/free. Any others that are null are presumed to
be successful no-ops.

struct cgroup_subsys_state *css_alloc(struct cgroup *cgrp)
(cgroup_mutex held by caller)

Called to allocate a subsystem state object for a cgroup. The
subsystem should allocate its subsystem state object for the passed
cgroup, returning a pointer to the new object on success or an
ERR_PTR() value. On success, the subsystem pointer should point to
a structure of type cgroup_subsys_state (typically embedded in a
larger subsystem-specific object), which will be initialized by the
cgroup system. Note that this will be called at initialization to
create the root subsystem state for this subsystem; this case can be
identified by the passed cgroup object having a NULL parent (since
it's the root of the hierarchy) and may be an appropriate place for
initialization code.

int css_online(struct cgroup *cgrp)
(cgroup_mutex held by caller)

Called after @cgrp has successfully completed all allocations and been
made visible to the cgroup_for_each_child/descendant_*() iterators. The
subsystem may choose to fail creation by returning -errno. This
callback can be used to implement reliable state sharing and
propagation along the hierarchy. See the comment on
cgroup_for_each_descendant_pre() for details.

void css_offline(struct cgroup *cgrp)
(cgroup_mutex held by caller)

This is the counterpart of css_online() and called iff css_online()
has succeeded on @cgrp. This signifies the beginning of the end of
@cgrp. @cgrp is being removed and the subsystem should start dropping
all references it's holding on @cgrp. When all references are dropped,
cgroup removal will proceed to the next step - css_free(). After this
callback, @cgrp should be considered dead to the subsystem.

void css_free(struct cgroup *cgrp)
(cgroup_mutex held by caller)

The cgroup system is about to free @cgrp; the subsystem should free
its subsystem state object. By the time this method is called, @cgrp
is completely unused; @cgrp->parent is still valid. (Note - this can also
be called for a newly-created cgroup if an error occurs after this
subsystem's css_alloc() method has been called for the new cgroup).

int can_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)

Called prior to moving one or more tasks into a cgroup; if the
subsystem returns an error, this will abort the attach operation.
@tset contains the tasks to be attached and is guaranteed to have at
least one task in it.

If there are multiple tasks in the taskset, then:
 - it's guaranteed that all are from the same thread group
 - @tset contains all tasks from the thread group whether or not
   they're switching cgroups
 - the first task is the leader

Each @tset entry also contains the task's old cgroup, and tasks which
aren't switching cgroup can be skipped easily using the
cgroup_taskset_for_each() iterator. Note that this isn't called on a
fork. If this method returns 0 (success) then this should remain valid
while the caller holds cgroup_mutex and it is ensured that either
attach() or cancel_attach() will be called in future.

void cancel_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)

Called when a task attach operation has failed after can_attach() has
succeeded. A subsystem whose can_attach() has side-effects should provide
this function, so that the subsystem can implement a rollback; otherwise
it is not necessary. This will only be called for subsystems whose
can_attach() operation has succeeded. The parameters are identical to
can_attach().

void attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)

Called after the task has been attached to the cgroup, to allow any
post-attachment activity that requires memory allocations or blocking.
The parameters are identical to can_attach().

void fork(struct task_struct *task)

Called when a task is forked into a cgroup.

void exit(struct task_struct *task)

Called during task exit.

void bind(struct cgroup *root)
(cgroup_mutex held by caller)

Called when a cgroup subsystem is rebound to a different hierarchy
and root cgroup. Currently this will only involve movement between
the default hierarchy (which never has sub-cgroups) and a hierarchy
that is being created/destroyed (and hence has no sub-cgroups).

4. Extended attribute usage
===========================

The cgroup filesystem supports certain types of extended attributes in its
directories and files. The currently supported types are:
	- Trusted (XATTR_TRUSTED)
	- Security (XATTR_SECURITY)

Both require the CAP_SYS_ADMIN capability to set.

Like in tmpfs, the extended attributes in the cgroup filesystem are stored
using kernel memory and it's advised to keep the usage at a minimum. This
is the reason why user-defined extended attributes are not supported, since
any user could set them and there is no limit on the value size.

The currently known users of this feature are SELinux, to limit cgroup usage
in containers, and systemd, for assorted metadata like the main PID in a
cgroup (systemd creates a cgroup per service).
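
As an illustration (the attribute name and value below are arbitrary,
and the setfattr/getfattr tools from the attr package are assumed to be
installed), a trusted attribute can be attached to a cgroup directory
by a sufficiently privileged process like this:

  # setfattr -n trusted.mydata -v "some value" /sys/fs/cgroup/rg1/my_cgroup
  # getfattr -n trusted.mydata /sys/fs/cgroup/rg1/my_cgroup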

5. Questions
============

Q: What's up with this '/bin/echo' ?
A: bash's builtin 'echo' command does not check calls to write() against
   errors. If you use it in the cgroup file system, you won't be
   able to tell whether a command succeeded or failed.
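
   For example, writing a PID that does not exist makes the failure
   visible with /bin/echo, whereas the builtin reports success (the
   exact error message depends on why the write failed):

     # echo 12345 > tasks          <- reports success even on failure
     # /bin/echo 12345 > tasks
     /bin/echo: write error: No such process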

Q: When I attach processes, only the first one on the line really gets attached !
A: We can only return one error code per call to write(). So you should also
   put only ONE PID per call to write().