cgroup subsys "blkio" implements the block IO controller. There seems to be
a need for various kinds of IO control policies (like proportional BW, max BW)
both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
The plan is to use the same cgroup-based management interface for the blkio
controller and, based on user options, switch IO policies in the background.

Currently two IO control policies are implemented. The first one is
proportional weight time based division of disk policy. It is implemented
in CFQ. Hence this policy takes effect only on leaf nodes when CFQ is being
used. The second one is a throttling policy which can be used to specify
upper IO rate limits on devices. This policy is implemented in the generic
block layer and can be used on leaf nodes as well as higher level logical
devices like device mapper.

Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple test of running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
	CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
	CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into kernel and mount IO controller (blkio); see
  cgroups.txt, Why are cgroups needed?.

	mount -t tmpfs cgroup_root /sys/fs/cgroup
	mkdir /sys/fs/cgroup/blkio
	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Create two cgroups
	mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2

- Set weights of group test1 and test2
	echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
	echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight

- Create two same size files (say 512MB each) on same disk (file1, file2) and
  launch two dd threads in different cgroups to read those files.

	sync
	echo 3 > /proc/sys/vm/drop_caches

	dd if=/mnt/sdb/zerofile1 of=/dev/null &
	echo $! > /sys/fs/cgroup/blkio/test1/tasks
	cat /sys/fs/cgroup/blkio/test1/tasks

	dd if=/mnt/sdb/zerofile2 of=/dev/null &
	echo $! > /sys/fs/cgroup/blkio/test2/tasks
	cat /sys/fs/cgroup/blkio/test2/tasks

- At macro level, the first dd should finish first. To get more precise data,
  keep on looking (with the help of a script, see the sketch below) at the
  blkio.time and blkio.sectors files of both test1 and test2 groups. This
  will tell how much disk time (in milliseconds) each group got and how many
  sectors each group dispatched to the disk. We provide fairness in terms of
  disk time, so ideally blkio.time of the cgroups should be in proportion to
  the weight.
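
  A minimal watcher loop, as a sketch (it assumes the blkio mount point used
  above and that the disk backing /mnt/sdb has major:minor 8:16; adjust both
  for your setup):

	while true; do
		grep 8:16 /sys/fs/cgroup/blkio/test1/blkio.time \
			  /sys/fs/cgroup/blkio/test2/blkio.time
		sleep 1
	done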

Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
	CONFIG_BLK_CGROUP=y

- Enable throttling in block layer
	CONFIG_BLK_DEV_THROTTLING=y

- Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Specify a bandwidth rate on particular device for root group. The format
  for policy is "<major>:<minor> <bytes_per_second>".

	echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

  The above will put a limit of 1MB/second on reads happening for the root
  group on the device having major/minor number 8:16.

- Run dd to read a file and see if rate is throttled to 1MB/s or not.

	# dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024
	1024+0 records in
	1024+0 records out
	4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s

  Limits for writes can be put using the blkio.throttle.write_bps_device file.
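
  For example, a 2MB/s write cap on the same device would look like this (the
  8:16 device numbers and the 2MB/s figure are only illustrative):

	echo "8:16 2097152" > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device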

Hierarchical Cgroups
====================
- Currently none of the IO control policies supports hierarchical groups. But
  the cgroup interface does allow creation of hierarchical cgroups and
  internally the IO policies treat them as a flat hierarchy.

  So this patch will allow creation of a cgroup hierarchy but at the backend
  everything will be treated as flat. So if somebody created a hierarchy like
  the following,

			root
			/  \
		     test1 test2
			|
		     test3

  CFQ and throttling will practically treat all groups at the same level.

				pivot
			     /  /   \  \
			root  test1 test2  test3
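
  For instance, a hierarchy like the one above could be created through the
  cgroup interface as follows (assuming the blkio mount point used earlier
  in this document):

	mkdir -p /sys/fs/cgroup/blkio/test1/test3 /sys/fs/cgroup/blkio/test2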

  Down the line we can implement hierarchical accounting/control support
  and also introduce a new cgroup file "use_hierarchy" which will control
  whether the cgroup hierarchy is viewed as flat or hierarchical by the
  policy. This is how the memory controller has implemented things as well.

Various user visible config options
===================================
CONFIG_BLK_CGROUP
	- Block IO controller.

CONFIG_DEBUG_BLK_CGROUP
	- Debug help. Right now some additional stats files show up in cgroup
	  if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
	- Enables group scheduling in CFQ. Currently only 1 level of group
	  creation is allowed.

CONFIG_BLK_DEV_THROTTLING
	- Enable block device throttling support in block layer.

Details of cgroup files
=======================
Proportional weight policy files
--------------------------------
- blkio.weight
	- Specifies per cgroup weight. This is the default weight of the group
	  on all the devices until and unless overridden by a per-device rule
	  (see blkio.weight_device).
	  Currently allowed range of weights is from 10 to 1000.
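
	  For example, setting the default weight of this cgroup to 500:

	  # echo 500 > blkio.weight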

- blkio.weight_device
	- One can specify per cgroup per device rules using this interface.
	  These rules override the default value of group weight as specified
	  by blkio.weight.

	  Following is the format.

	  # echo dev_maj:dev_minor weight > blkio.weight_device
	  Configure weight=300 on /dev/sdb (8:16) in this cgroup
	  # echo 8:16 300 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:16    300

	  Configure weight=500 on /dev/sda (8:0) in this cgroup
	  # echo 8:0 500 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:0     500
	  8:16    300

	  Remove specific weight for /dev/sda in this cgroup
	  # echo 8:0 0 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:16    300

- blkio.time
	- Disk time allocated to cgroup per device in milliseconds. First
	  two fields specify the major and minor number of the device and
	  third field specifies the disk time allocated to group in
	  milliseconds.
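
	  For example (the time value is illustrative):

	  # cat blkio.time
	  8:16 245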

- blkio.sectors
	- Number of sectors transferred to/from disk by the group. First
	  two fields specify the major and minor number of the device and
	  third field specifies the number of sectors transferred by the
	  group to/from the device.

- blkio.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.
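
	  For example, a group that has only done 4MB of synchronous reads on
	  device 8:16 might show something like this (numbers are illustrative):

	  # cat blkio.io_service_bytes
	  8:16 Read 4194304
	  8:16 Write 0
	  8:16 Sync 4194304
	  8:16 Async 0
	  8:16 Total 4194304
	  Total 4194304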

- blkio.io_serviced
	- Number of IOs completed to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of IOs.

- blkio.io_service_time
	- Total amount of time between request dispatch and request completion
	  for the IOs done by this cgroup. This is in nanoseconds to make it
	  meaningful for flash devices too. For devices with queue depth of 1,
	  this time represents the actual service time. When queue_depth > 1,
	  that is no longer true as requests may be served out of order. This
	  may cause the service time for a given IO to include the service time
	  of multiple IOs when served out of order which may result in total
	  io_service_time > actual time elapsed. This time is further divided by
	  the type of operation - read or write, sync or async. First two fields
	  specify the major and minor number of the device, third field
	  specifies the operation type and the fourth field specifies the
	  io_service_time in ns.

- blkio.io_wait_time
	- Total amount of time the IOs for this cgroup spent waiting in the
	  scheduler queues for service. This can be greater than the total time
	  elapsed since it is cumulative io_wait_time for all IOs. It is not a
	  measure of total time the cgroup spent waiting but rather a measure of
	  the wait_time for its individual IOs. For devices with queue_depth > 1
	  this metric does not include the time spent waiting for service once
	  the IO is dispatched to the device but till it actually gets serviced
	  (there might be a time lag here due to re-ordering of requests by the
	  device). This is in nanoseconds to make it meaningful for flash
	  devices too. This time is further divided by the type of operation -
	  read or write, sync or async. First two fields specify the major and
	  minor number of the device, third field specifies the operation type
	  and the fourth field specifies the io_wait_time in ns.

- blkio.io_merged
	- Total number of bios/requests merged into requests belonging to this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.

- blkio.io_queued
	- Total number of requests queued up at any given instant for this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.

- blkio.avg_queue_size
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  The average queue size for this cgroup over the entire time of this
	  cgroup's existence. Queue size samples are taken each time one of the
	  queues of this cgroup gets a timeslice.

- blkio.group_wait_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time the cgroup had to wait since it became busy
	  (i.e., went from 0 to 1 request queued) to get a timeslice for one of
	  its queues. This is different from the io_wait_time which is the
	  cumulative total of the amount of time spent by each IO in that cgroup
	  waiting in the scheduler queue. This is in nanoseconds. If this is
	  read when the cgroup is in a waiting (for timeslice) state, the stat
	  will only report the group_wait_time accumulated till the last time it
	  got a timeslice and will not include the current delta.

- blkio.empty_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time a cgroup spends without any pending
	  requests when not being served, i.e., it does not include any time
	  spent idling for one of the queues of the cgroup. This is in
	  nanoseconds. If this is read when the cgroup is in an empty state,
	  the stat will only report the empty_time accumulated till the last
	  time it had a pending request and will not include the current delta.

- blkio.idle_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time spent by the IO scheduler idling for a
	  given cgroup in anticipation of a better request than the existing
	  ones from other queues/cgroups. This is in nanoseconds. If this is
	  read when the cgroup is in an idling state, the stat will only report
	  the idle_time accumulated till the last idle period and will not
	  include the current delta.

- blkio.dequeue
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
	  gives the statistics about how many times a group was dequeued
	  from service tree of the device. First two fields specify the major
	  and minor number of the device and third field specifies the number
	  of times a group was dequeued from a particular device.

Throttling/Upper limit policy files
-----------------------------------
- blkio.throttle.read_bps_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device

- blkio.throttle.write_bps_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device

- blkio.throttle.read_iops_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in IOs per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device

- blkio.throttle.write_iops_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in IOs per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device

Note: If both BW and IOPS rules are specified for a device, then IO is
      subjected to both the constraints.
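
For example, to limit reads on device 8:16 to 2MB/s and 1000 IOPS at the same
time (both values are only illustrative):

	echo "8:16 2097152" > /cgrp/blkio.throttle.read_bps_device
	echo "8:16 1000" > /cgrp/blkio.throttle.read_iops_device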

- blkio.throttle.io_serviced
	- Number of IOs (bio) completed to/from the disk by the group (as
	  seen by throttling policy). These are further divided by the type
	  of operation - read or write, sync or async. First two fields specify
	  the major and minor number of the device, third field specifies the
	  operation type and the fourth field specifies the number of IOs.

	  blkio.io_serviced does accounting as seen by CFQ and counts are in
	  number of requests (struct request). On the other hand,
	  blkio.throttle.io_serviced counts number of IOs in terms of number
	  of bios as seen by throttling policy. These bios can later be
	  merged by elevator and total number of requests completed can be
	  less.

- blkio.throttle.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.

	  These numbers should roughly be the same as blkio.io_service_bytes as
	  updated by CFQ. The difference between the two is that
	  blkio.io_service_bytes will not be updated if CFQ is not operating
	  on the request queue.

Common files among various policies
-----------------------------------
- blkio.reset_stats
	- Writing an int to this file will result in resetting all the stats
	  for that cgroup.
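
	  For example:

	  # echo 1 > blkio.reset_stats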

CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue and a single queue might not
drive deep enough request queue depths to keep the storage busy. In such
scenarios one can try setting slice_idle=0 and that would switch CFQ to IOPS
(IO operations per second) mode on NCQ supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence be
able to drive higher queue depths and achieve better throughput. That also
means that cfq provides fairness among groups in terms of IOPS and not in
terms of disk time.
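
A minimal example (sdb and the default value of 8 shown below are just an
illustration; substitute your own disk name):

	# cat /sys/block/sdb/queue/iosched/slice_idle
	8
	# echo 0 > /sys/block/sdb/queue/iosched/slice_idle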

/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is the same as slice_idle and does not do anything if
slice_idle is enabled.

One can experience an overall throughput drop if one has created multiple
groups and put applications in those groups which are not driving enough
IO to keep the disk busy. In that case set group_idle=0, and CFQ will not idle
on individual groups and throughput should improve.
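
For example (again assuming sdb):

	# echo 0 > /sys/block/sdb/queue/iosched/group_idle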

What works
==========
- Currently only sync IO queues are supported. All the buffered writes are
  still system wide and not per group. Hence we will not see service
  differentiation between buffered writes between groups.