1 .. SPDX-License-Identifier: GPL-2.0
2 .. include:: <isonum.txt>
4 ===========================================
5 User Interface for Resource Control feature
6 ===========================================
8 :Copyright: |copy| 2016 Intel Corporation
9 :Authors: - Fenghua Yu <fenghua.yu@intel.com>
10 - Tony Luck <tony.luck@intel.com>
11 - Vikas Shivappa <vikas.shivappa@intel.com>
Intel refers to this feature as Intel Resource Director Technology (Intel(R) RDT).
AMD refers to this feature as AMD Platform Quality of Service (AMD QoS).

This feature is enabled by the CONFIG_X86_CPU_RESCTRL Kconfig option and is
advertised via the following x86 /proc/cpuinfo feature flags:

20 =============================================== ================================
21 RDT (Resource Director Technology) Allocation "rdt_a"
22 CAT (Cache Allocation Technology) "cat_l3", "cat_l2"
23 CDP (Code and Data Prioritization) "cdp_l3", "cdp_l2"
24 CQM (Cache QoS Monitoring) "cqm_llc", "cqm_occup_llc"
25 MBM (Memory Bandwidth Monitoring) "cqm_mbm_total", "cqm_mbm_local"
26 MBA (Memory Bandwidth Allocation) "mba"
27 SMBA (Slow Memory Bandwidth Allocation) ""
28 BMEC (Bandwidth Monitoring Event Configuration) ""
29 =============================================== ================================
31 Historically, new features were made visible by default in /proc/cpuinfo. This
32 resulted in the feature flags becoming hard to parse by humans. Adding a new
33 flag to /proc/cpuinfo should be avoided if user space can obtain information
34 about the feature from resctrl's info directory.
To use the feature, mount the file system::

  # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps][,debug]] /sys/fs/resctrl

mount options are:

"cdp":
        Enable code/data prioritization in L3 cache allocations.
"cdpl2":
        Enable code/data prioritization in L2 cache allocations.
"mba_MBps":
        Enable the MBA Software Controller (mba_sc) to specify MBA
        bandwidth in MiBps.
"debug":
        Make debug files accessible. Available debug files are annotated with
        "Available only with debug option".

L2 and L3 CDP are controlled separately.
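For example, a sketch of mounting with CDP enabled at both cache levels
together with the debug files (each option is only accepted if the hardware
supports the corresponding feature)::

  # mount -t resctrl -o cdp,cdpl2,debug resctrl /sys/fs/resctrl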
55 RDT features are orthogonal. A particular system may support only
56 monitoring, only control, or both monitoring and control. Cache
57 pseudo-locking is a unique way of using cache control to "pin" or
58 "lock" data in the cache. Details can be found in
59 "Cache Pseudo-Locking".
The mount succeeds if either allocation or monitoring is present, but
63 only those files and directories supported by the system will be created.
64 For more details on the behavior of the interface during monitoring
65 and allocation, see the "Resource alloc and monitor groups" section.
Info directory
==============

The 'info' directory contains information about the enabled
resources. Each resource has its own subdirectory. The subdirectory
names reflect the resource names.

Each subdirectory contains the following files with respect to
allocation:

Cache resource (L3/L2) subdirectory contains the following files
related to allocation:
"num_closids":
        The number of CLOSIDs which are valid for this
        resource. The kernel uses the smallest number of
        CLOSIDs of all enabled resources as limit.

"cbm_mask":
        The bitmask which is valid for this resource.
        This mask is equivalent to 100%.

"min_cbm_bits":
        The minimum number of consecutive bits which
        must be set when writing a mask.

"shareable_bits":
        Bitmask of shareable resource with other executing
        entities (e.g. I/O). User can use this when
        setting up exclusive cache partitions. Note that
        some platforms support devices that have their
        own settings for cache use which can over-ride
        these bits.
"bit_usage":
        Annotated capacity bitmasks showing how all
        instances of the resource are used. The legend is:

        "0":
                Corresponding region is unused. When the system's
                resources have been allocated and a "0" is found
                in "bit_usage" it is a sign that resources are
                wasted.

        "H":
                Corresponding region is used by hardware only
                but available for software use. If a resource
                has bits set in "shareable_bits" but not all
                of these bits appear in the resource groups'
                schematas then the bits appearing in
                "shareable_bits" but no resource group will
                be marked as "H".

        "X":
                Corresponding region is available for sharing and
                used by hardware and software. These are the
                bits that appear in "shareable_bits" as
                well as a resource group's allocation.

        "S":
                Corresponding region is used by software
                and available for sharing.

        "E":
                Corresponding region is used exclusively by
                one resource group. No sharing allowed.

        "P":
                Corresponding region is pseudo-locked. No
                sharing allowed.

"sparse_masks":
        Indicates if non-contiguous 1s value in CBM is supported.

        "0":
                Only contiguous 1s value in CBM is supported.

        "1":
                Non-contiguous 1s value in CBM is supported.
Memory bandwidth (MB) subdirectory contains the following files
with respect to allocation:

"min_bandwidth":
        The minimum memory bandwidth percentage which
        is allowed.

"bandwidth_gran":
        The granularity in which the memory bandwidth
        percentage is allocated. The allocated
        b/w percentage is rounded off to the next
        control step available on the hardware. The
        available bandwidth control steps are:
        min_bandwidth + N * bandwidth_gran.
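For example, a quick sketch of computing the valid settings (the values shown
are hypothetical and depend on the CPU model)::

  # cat /sys/fs/resctrl/info/MB/min_bandwidth
  10
  # cat /sys/fs/resctrl/info/MB/bandwidth_gran
  10

With these values the valid bandwidth settings are 10, 20, 30, ... 100, and a
request for, say, 35 is rounded off to the next available control step.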
"delay_linear":
        Indicates if the delay scale is linear or
        non-linear. This field is purely informational
        only.

"thread_throttle_mode":
        Indicator on Intel systems of how tasks running on threads
        of a physical core are throttled in cases where they
        request different memory bandwidth percentages:

        "max":
                the smallest percentage is applied
                to all threads

        "per-thread":
                bandwidth percentages are directly applied to
                the threads running on the core
If RDT monitoring is available there will be an "L3_MON" directory
with the following files:

"num_rmids":
        The number of RMIDs available. This is the
        upper bound for how many "CTRL_MON" + "MON"
        groups can be created.

"mon_features":
        Lists the monitoring events if
        monitoring is enabled for the resource.
        Example::

          # cat /sys/fs/resctrl/info/L3_MON/mon_features
          llc_occupancy
          mbm_total_bytes
          mbm_local_bytes

        If the system supports Bandwidth Monitoring Event
        Configuration (BMEC), then the bandwidth events will
        be configurable. The output will be::

          # cat /sys/fs/resctrl/info/L3_MON/mon_features
          llc_occupancy
          mbm_total_bytes
          mbm_total_bytes_config
          mbm_local_bytes
          mbm_local_bytes_config
199 "mbm_total_bytes_config", "mbm_local_bytes_config":
200 Read/write files containing the configuration for the mbm_total_bytes
201 and mbm_local_bytes events, respectively, when the Bandwidth
202 Monitoring Event Configuration (BMEC) feature is supported.
203 The event configuration settings are domain specific and affect
204 all the CPUs in the domain. When either event configuration is
205 changed, the bandwidth counters for all RMIDs of both events
206 (mbm_total_bytes as well as mbm_local_bytes) are cleared for that
207 domain. The next read for every RMID will report "Unavailable"
208 and subsequent reads will report the valid value.
210 Following are the types of events supported:
212 ==== ========================================================
214 ==== ========================================================
215 6 Dirty Victims from the QOS domain to all types of memory
216 5 Reads to slow memory in the non-local NUMA domain
217 4 Reads to slow memory in the local NUMA domain
218 3 Non-temporal writes to non-local NUMA domain
219 2 Non-temporal writes to local NUMA domain
220 1 Reads to memory in the non-local NUMA domain
221 0 Reads to memory in the local NUMA domain
222 ==== ========================================================
224 By default, the mbm_total_bytes configuration is set to 0x7f to count
225 all the event types and the mbm_local_bytes configuration is set to
226 0x15 to count all the local memory events.
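Working through the defaults against the table above: 0x7f sets bits 0-6,
i.e. all seven event types, while 0x15 is 10101b, i.e. bits 0, 2 and 4,
which selects reads to local memory, non-temporal writes to the local NUMA
domain and reads to local slow memory.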
230 * To view the current configuration::
233 # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
234 0=0x7f;1=0x7f;2=0x7f;3=0x7f
236 # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
0=0x15;1=0x15;2=0x15;3=0x15
* To change the mbm_total_bytes to count only reads on domain 0,
  bits 0, 1, 4 and 5 need to be set, which is 110011b in binary
  (in hexadecimal 0x33)::
244 # echo "0=0x33" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
246 # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
247 0=0x33;1=0x7f;2=0x7f;3=0x7f
* To change the mbm_local_bytes to count all the slow memory reads on
  domains 0 and 1, bits 4 and 5 need to be set, which is 110000b
  in binary (in hexadecimal 0x30)::
254 # echo "0=0x30;1=0x30" > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
256 # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
0=0x30;1=0x30;2=0x15;3=0x15
259 "max_threshold_occupancy":
260 Read/write file provides the largest value (in
261 bytes) at which a previously used LLC_occupancy
262 counter can be considered for re-use.
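A minimal sketch of inspecting and raising this threshold (the values shown
are hypothetical)::

  # cat /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
  65536
  # echo 131072 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy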
264 Finally, in the top level of the "info" directory there is a file
265 named "last_cmd_status". This is reset with every "command" issued
266 via the file system (making new directories or writing to any of the
267 control files). If the command was successful, it will read as "ok".
If the command failed, it will provide more information than can be
conveyed in the error returns from file operations. E.g.
::
272 # echo L3:0=f7 > schemata
273 bash: echo: write error: Invalid argument
274 # cat info/last_cmd_status
275 mask f7 has non-consecutive 1-bits
277 Resource alloc and monitor groups
278 =================================
280 Resource groups are represented as directories in the resctrl file
281 system. The default group is the root directory which, immediately
282 after mounting, owns all the tasks and cpus in the system and can make
283 full use of all resources.
285 On a system with RDT control features additional directories can be
286 created in the root directory that specify different amounts of each
287 resource (see "schemata" below). The root and these additional top level
288 directories are referred to as "CTRL_MON" groups below.
290 On a system with RDT monitoring the root directory and other top level
291 directories contain a directory named "mon_groups" in which additional
292 directories can be created to monitor subsets of tasks in the CTRL_MON
group that is their ancestor. These are called "MON" groups in the rest
of this document.
296 Removing a directory will move all tasks and cpus owned by the group it
297 represents to the parent. Removing one of the created CTRL_MON groups
298 will automatically remove all MON groups below it.
300 Moving MON group directories to a new parent CTRL_MON group is supported
301 for the purpose of changing the resource allocations of a MON group
302 without impacting its monitoring data or assigned tasks. This operation
303 is not allowed for MON groups which monitor CPUs. No other move
operation is currently allowed other than simply renaming a CTRL_MON or
MON group.
307 All groups contain the following files:
"tasks":
Reading this file shows the list of all tasks that belong to
this group. Writing a task id to the file will add a task to the
group. Multiple tasks can be added by separating the task ids
with commas. Tasks will be assigned sequentially. Multiple
failures are not supported. A single failure encountered while
attempting to assign a task will cause the operation to abort and
any tasks already added before the failure will remain in the group.
317 Failures will be logged to /sys/fs/resctrl/info/last_cmd_status.
319 If the group is a CTRL_MON group the task is removed from
320 whichever previous CTRL_MON group owned the task and also from
321 any MON group that owned the task. If the group is a MON group,
322 then the task must already belong to the CTRL_MON parent of this
323 group. The task is removed from any previous MON group.
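For example, a sketch of moving two tasks (hypothetical PIDs) into a
hypothetical CTRL_MON group "p0" with a single write::

  # echo 1234,5678 > /sys/fs/resctrl/p0/tasks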
"cpus":
Reading this file shows a bitmask of the logical CPUs owned by
328 this group. Writing a mask to this file will add and remove
329 CPUs to/from this group. As with the tasks file a hierarchy is
330 maintained where MON groups may only include CPUs owned by the
331 parent CTRL_MON group.
332 When the resource group is in pseudo-locked mode this file will
333 only be readable, reflecting the CPUs associated with the
334 pseudo-locked region.
338 Just like "cpus", only using ranges of CPUs instead of bitmasks.
341 When control is enabled all CTRL_MON groups will also contain:
"schemata":
A list of all the resources available to this group.
Each resource has its own line and format - see below for details.

"size":
Mirrors the display of the "schemata" file to display the size in
bytes of each allocation instead of the bits representing the
allocation.

"mode":
The "mode" of the resource group dictates the sharing of its
354 allocations. A "shareable" resource group allows sharing of its
355 allocations while an "exclusive" resource group does not. A
356 cache pseudo-locked region is created by first writing
357 "pseudo-locksetup" to the "mode" file before writing the cache
358 pseudo-locked region's schemata to the resource group's "schemata"
359 file. On successful pseudo-locked region creation the mode will
360 automatically change to "pseudo-locked".
"ctrl_hw_id":
Available only with debug option. The identifier used by hardware
for the control group. On x86 this is the CLOSID.
366 When monitoring is enabled all MON groups will also contain:
"mon_data":
This contains a set of files organized by L3 domain and by
RDT event. E.g. on a system with two L3 domains there will
be subdirectories "mon_L3_00" and "mon_L3_01". Each of these
directories has one file per event (e.g. "llc_occupancy",
373 "mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
374 files provide a read out of the current value of the event for
375 all tasks in the group. In CTRL_MON groups these files provide
376 the sum for all tasks in the CTRL_MON group and all tasks in
377 MON groups. Please see example section for more details on usage.
378 On systems with Sub-NUMA Cluster (SNC) enabled there are extra
379 directories for each node (located within the "mon_L3_XX" directory
380 for the L3 cache they occupy). These are named "mon_sub_L3_YY"
381 where "YY" is the node number.
"mon_hw_id":
Available only with debug option. The identifier used by hardware
for the monitor group. On x86 this is the RMID.
387 Resource allocation rules
388 -------------------------
When a task is running the following rules define which resources are
available to it:
393 1) If the task is a member of a non-default group, then the schemata
394 for that group is used.
396 2) Else if the task belongs to the default group, but is running on a
CPU that is assigned to some specific group, then the schemata for the
CPU's group is used.
400 3) Otherwise the schemata for the default group is used.
402 Resource monitoring rules
403 -------------------------
404 1) If a task is a member of a MON group, or non-default CTRL_MON group
405 then RDT events for the task will be reported in that group.
407 2) If a task is a member of the default CTRL_MON group, but is running
408 on a CPU that is assigned to some specific group, then the RDT events
409 for the task will be reported in that group.
3) Otherwise RDT events for the task will be reported in the root level
"mon_data".
415 Notes on cache occupancy monitoring and control
416 ===============================================
417 When moving a task from one group to another you should remember that
418 this only affects *new* cache allocations by the task. E.g. you may have
a task in a monitor group showing 3 MB of cache occupancy. If you move
it to a new group and immediately check the occupancy of the old and new
421 groups you will likely see that the old group is still showing 3 MB and
422 the new group zero. When the task accesses locations still in cache from
423 before the move, the h/w does not update any counters. On a busy system
424 you will likely see the occupancy in the old group go down as cache lines
425 are evicted and re-used while the occupancy in the new group rises as
426 the task accesses memory and loads into the cache are counted based on
427 membership in the new group.
429 The same applies to cache allocation control. Moving a task to a group
430 with a smaller cache partition will not evict any cache lines. The
431 process may continue to use them from the old partition.
Hardware uses CLOSid (Class of service ID) and an RMID (Resource monitoring ID)
to identify a control group and a monitoring group respectively. Each of
the resource groups is mapped to these IDs based on the kind of group. The
number of CLOSids and RMIDs is limited by the hardware, hence the creation of
a "CTRL_MON" directory may fail if we run out of either CLOSIDs or RMIDs,
and creation of a "MON" group may fail if we run out of RMIDs.
440 max_threshold_occupancy - generic concepts
441 ------------------------------------------
Note that an RMID once freed may not be immediately available for use as
the RMID is still tagged to the cache lines of its previous user.
Hence such RMIDs are placed on a limbo list and checked periodically until
the cache occupancy has gone down. If at some point the system has a lot of
limbo RMIDs that are not yet ready to be used, the user may see an -EBUSY
error during mkdir.
450 max_threshold_occupancy is a user configurable value to determine the
451 occupancy at which an RMID can be freed.
453 The mon_llc_occupancy_limbo tracepoint gives the precise occupancy in bytes
for a subset of RMIDs that are not immediately available for allocation.
This can't be relied on to produce output every second; it may be necessary
456 to attempt to create an empty monitor group to force an update. Output may
457 only be produced if creation of a control or monitor group fails.
459 Schemata files - general concepts
460 ---------------------------------
461 Each line in the file describes one resource. The line starts with
462 the name of the resource, followed by specific values to be applied
463 in each of the instances of that resource on the system.
Cache IDs
---------
On current generation systems there is one L3 cache per socket and L2
468 caches are generally just shared by the hyperthreads on a core, but this
469 isn't an architectural requirement. We could have multiple separate L3
caches on a socket, or multiple cores could share an L2 cache. So instead
471 of using "socket" or "core" to define the set of logical cpus sharing
472 a resource we use a "Cache ID". At a given cache level this will be a
473 unique number across the whole system (but it isn't guaranteed to be a
474 contiguous sequence, there may be gaps). To find the ID for each logical
475 CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id
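For example, to find the L3 cache ID for CPU 0 (on most x86 systems "index3"
corresponds to the L3 cache; the index number and the output shown here are
assumptions for illustration)::

  # cat /sys/devices/system/cpu/cpu0/cache/index3/id
  0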
477 Cache Bit Masks (CBM)
478 ---------------------
479 For cache resources we describe the portion of the cache that is available
480 for allocation using a bitmask. The maximum value of the mask is defined
481 by each cpu model (and may be different for different cache levels). It
482 is found using CPUID, but is also provided in the "info" directory of
483 the resctrl file system in "info/{resource}/cbm_mask". Some Intel hardware
484 requires that these masks have all the '1' bits in a contiguous block. So
485 0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
and 0xA are not. Check /sys/fs/resctrl/info/{resource}/sparse_masks
to see if non-contiguous 1s values are supported. On a system with a 20-bit mask
488 each bit represents 5% of the capacity of the cache. You could partition
489 the cache into four equal parts with masks: 0x1f, 0x3e0, 0x7c00, 0xf8000.
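For example, a sketch of checking the full bitmask and whether non-contiguous
masks are allowed on such a system (the outputs shown are hypothetical)::

  # cat /sys/fs/resctrl/info/L3/cbm_mask
  fffff
  # cat /sys/fs/resctrl/info/L3/sparse_masks
  0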
491 Notes on Sub-NUMA Cluster mode
492 ==============================
493 When SNC mode is enabled, Linux may load balance tasks between Sub-NUMA
494 nodes much more readily than between regular NUMA nodes since the CPUs
495 on Sub-NUMA nodes share the same L3 cache and the system may report
496 the NUMA distance between Sub-NUMA nodes with a lower value than used
497 for regular NUMA nodes.
499 The top-level monitoring files in each "mon_L3_XX" directory provide
500 the sum of data across all SNC nodes sharing an L3 cache instance.
501 Users who bind tasks to the CPUs of a specific Sub-NUMA node can read
502 the "llc_occupancy", "mbm_total_bytes", and "mbm_local_bytes" in the
503 "mon_sub_L3_YY" directories to get node local data.
505 Memory bandwidth allocation is still performed at the L3 cache
506 level. I.e. throttling controls are applied to all SNC nodes.
508 L3 cache allocation bitmaps also apply to all SNC nodes. But note that
509 the amount of L3 cache represented by each bit is divided by the number
510 of SNC nodes per L3 cache. E.g. with a 100MB cache on a system with 10-bit
511 allocation masks each bit normally represents 10MB. With SNC mode enabled
512 with two SNC nodes per L3 cache, each bit only represents 5MB.
514 Memory bandwidth Allocation and monitoring
515 ==========================================
517 For Memory bandwidth resource, by default the user controls the resource
518 by indicating the percentage of total memory bandwidth.
520 The minimum bandwidth percentage value for each cpu model is predefined
521 and can be looked up through "info/MB/min_bandwidth". The bandwidth
522 granularity that is allocated is also dependent on the cpu model and can
523 be looked up at "info/MB/bandwidth_gran". The available bandwidth
524 control steps are: min_bw + N * bw_gran. Intermediate values are rounded
525 to the next control step available on the hardware.
The bandwidth throttling is a core specific mechanism on some Intel
SKUs. Using a high bandwidth and a low bandwidth setting on two threads
529 sharing a core may result in both threads being throttled to use the
530 low bandwidth (see "thread_throttle_mode").
The fact that Memory bandwidth allocation (MBA) may be a core
specific mechanism whereas memory bandwidth monitoring (MBM) is done at
the package level may lead to confusion when users try to apply control
535 via the MBA and then monitor the bandwidth to see if the controls are
536 effective. Below are such scenarios:
1. User may *not* see an increase in actual bandwidth when percentage
values are increased:
541 This can occur when aggregate L2 external bandwidth is more than L3
542 external bandwidth. Consider an SKL SKU with 24 cores on a package and
543 where L2 external is 10GBps (hence aggregate L2 external bandwidth is
544 240GBps) and L3 external bandwidth is 100GBps. Now a workload with '20
545 threads, having 50% bandwidth, each consuming 5GBps' consumes the max L3
546 bandwidth of 100GBps although the percentage value specified is only 50%
547 << 100%. Hence increasing the bandwidth percentage will not yield any
548 more bandwidth. This is because although the L2 external bandwidth still
549 has capacity, the L3 external bandwidth is fully used. Also note that
550 this would be dependent on number of cores the benchmark is run on.
552 2. Same bandwidth percentage may mean different actual bandwidth
553 depending on # of threads:
For the same SKU in #1, a 'single thread, with 10% bandwidth' and '4
threads, with 10% bandwidth' can consume up to 10GBps and 40GBps although
they have the same percentage bandwidth of 10%. This is simply because as
558 threads start using more cores in an rdtgroup, the actual bandwidth may
559 increase or vary although user specified bandwidth percentage is same.
561 In order to mitigate this and make the interface more user friendly,
562 resctrl added support for specifying the bandwidth in MiBps as well. The
kernel underneath would use a software feedback mechanism or a "Software
Controller (mba_sc)" which reads the actual bandwidth using MBM counters
and adjusts the memory bandwidth percentages to ensure::
567 "actual bandwidth < user specified bandwidth".
By default, the schemata would take the bandwidth percentage values
whereas the user can switch to the "MBA software controller" mode using
the mount option 'mba_MBps'. The schemata format is specified in the
sections below.
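For example, to mount with the software controller enabled::

  # mount -t resctrl -o mba_MBps resctrl /sys/fs/resctrl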
574 L3 schemata file details (code and data prioritization disabled)
575 ----------------------------------------------------------------
576 With CDP disabled the L3 schemata format is::
578 L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
580 L3 schemata file details (CDP enabled via mount option to resctrl)
581 ------------------------------------------------------------------
582 When CDP is enabled L3 control is split into two separate resources
583 so you can specify independent masks for code and data like this::
585 L3DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
586 L3CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
588 L2 schemata file details
589 ------------------------
CDP is supported at L2 using the 'cdpl2' mount option. The schemata
format is either::

  L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

or::

  L2DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
  L2CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
601 Memory bandwidth Allocation (default mode)
602 ------------------------------------------
Memory b/w domain is L3 cache::
607 MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
609 Memory bandwidth Allocation specified in MiBps
610 ----------------------------------------------
Memory bandwidth domain is L3 cache::
615 MB:<cache_id0>=bw_MiBps0;<cache_id1>=bw_MiBps1;...
617 Slow Memory Bandwidth Allocation (SMBA)
618 ---------------------------------------
619 AMD hardware supports Slow Memory Bandwidth Allocation (SMBA).
620 CXL.memory is the only supported "slow" memory device. With the
621 support of SMBA, the hardware enables bandwidth allocation on
622 the slow memory devices. If there are multiple such devices in
623 the system, the throttling logic groups all the slow sources
624 together and applies the limit on them as a whole.
The presence of SMBA (with CXL.memory) is independent of the presence of
slow memory devices. If there are no such devices on the system, then
628 configuring SMBA will have no impact on the performance of the system.
The bandwidth domain for slow memory is L3 cache. Its schemata file
is formatted as::
634 SMBA:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
636 Reading/writing the schemata file
637 ---------------------------------
638 Reading the schemata file will show the state of all resources
639 on all domains. When writing you only need to specify those values
which you wish to change. E.g.
::

# cat schemata
L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
# echo "L3DATA:2=3c0;" > schemata
# cat schemata
648 L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
649 L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
651 Reading/writing the schemata file (on AMD systems)
652 --------------------------------------------------
653 Reading the schemata file will show the current bandwidth limit on all
654 domains. The allocated resources are in multiples of one eighth GB/s.
When writing to the file, you need to specify the cache id for which you
wish to configure the bandwidth limit.

For example, to allocate a 2GB/s limit (16 * 1/8 GB/s) on cache id 1:

::

# cat schemata
MB:0=2048;1=2048;2=2048;3=2048
664 L3:0=ffff;1=ffff;2=ffff;3=ffff
666 # echo "MB:1=16" > schemata
668 MB:0=2048;1= 16;2=2048;3=2048
669 L3:0=ffff;1=ffff;2=ffff;3=ffff
671 Reading/writing the schemata file (on AMD systems) with SMBA feature
672 --------------------------------------------------------------------
Reading and writing the schemata file is the same as without SMBA,
as described in the previous section.

For example, to allocate an 8GB/s limit (64 * 1/8 GB/s) on cache id 1:

::

# cat schemata
SMBA:0=2048;1=2048;2=2048;3=2048
682 MB:0=2048;1=2048;2=2048;3=2048
683 L3:0=ffff;1=ffff;2=ffff;3=ffff
685 # echo "SMBA:1=64" > schemata
687 SMBA:0=2048;1= 64;2=2048;3=2048
688 MB:0=2048;1=2048;2=2048;3=2048
689 L3:0=ffff;1=ffff;2=ffff;3=ffff
Cache Pseudo-Locking
====================
CAT enables a user to specify the amount of cache space that an
694 application can fill. Cache pseudo-locking builds on the fact that a
695 CPU can still read and write data pre-allocated outside its current
696 allocated area on a cache hit. With cache pseudo-locking, data can be
697 preloaded into a reserved portion of cache that no application can
698 fill, and from that point on will only serve cache hits. The cache
699 pseudo-locked memory is made accessible to user space where an
700 application can map it into its virtual address space and thus have
701 a region of memory with reduced average read latency.
703 The creation of a cache pseudo-locked region is triggered by a request
704 from the user to do so that is accompanied by a schemata of the region
705 to be pseudo-locked. The cache pseudo-locked region is created as follows:
707 - Create a CAT allocation CLOSNEW with a CBM matching the schemata
708 from the user of the cache region that will contain the pseudo-locked
709 memory. This region must not overlap with any current CAT allocation/CLOS
710 on the system and no future overlap with this cache region is allowed
711 while the pseudo-locked region exists.
- Create a contiguous region of memory of the same size as the cache
  region.
714 - Flush the cache, disable hardware prefetchers, disable preemption.
- Make CLOSNEW the active CLOS and touch the allocated memory to load
  it into the cache.
717 - Set the previous CLOS as active.
718 - At this point the closid CLOSNEW can be released - the cache
719 pseudo-locked region is protected as long as its CBM does not appear in
720 any CAT allocation. Even though the cache pseudo-locked region will from
721 this point on not appear in any CBM of any CLOS an application running with
722 any CLOS will be able to access the memory in the pseudo-locked region since
723 the region continues to serve cache hits.
724 - The contiguous region of memory loaded into the cache is exposed to
725 user-space as a character device.
727 Cache pseudo-locking increases the probability that data will remain
728 in the cache via carefully configuring the CAT feature and controlling
729 application behavior. There is no guarantee that data is placed in
730 cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
731 “locked” data from cache. Power management C-states may shrink or
732 power off cache. Deeper C-states will automatically be restricted on
733 pseudo-locked region creation.
735 It is required that an application using a pseudo-locked region runs
736 with affinity to the cores (or a subset of the cores) associated
737 with the cache on which the pseudo-locked region resides. A sanity check
738 within the code will not allow an application to map pseudo-locked memory
739 unless it runs with affinity to cores associated with the cache on which the
740 pseudo-locked region resides. The sanity check is only done during the
initial mmap() handling, there is no enforcement afterwards and the
application itself needs to ensure it remains affine to the correct cores.
744 Pseudo-locking is accomplished in two stages:
746 1) During the first stage the system administrator allocates a portion
747 of cache that should be dedicated to pseudo-locking. At this time an
748 equivalent portion of memory is allocated, loaded into allocated
749 cache portion, and exposed as a character device.
750 2) During the second stage a user-space application maps (mmap()) the
751 pseudo-locked memory into its address space.
753 Cache Pseudo-Locking Interface
754 ------------------------------
755 A pseudo-locked region is created using the resctrl interface as follows:
757 1) Create a new resource group by creating a new directory in /sys/fs/resctrl.
758 2) Change the new resource group's mode to "pseudo-locksetup" by writing
759 "pseudo-locksetup" to the "mode" file.
760 3) Write the schemata of the pseudo-locked region to the "schemata" file. All
bits within the schemata should be "unused" according to the "bit_usage"
file.
764 On successful pseudo-locked region creation the "mode" file will contain
765 "pseudo-locked" and a new character device with the same name as the resource
766 group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
767 by user space in order to obtain access to the pseudo-locked memory region.
769 An example of cache pseudo-locked region creation and usage can be found below.
771 Cache Pseudo-Locking Debugging Interface
772 ----------------------------------------
773 The pseudo-locking debugging interface is enabled by default (if
774 CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.
776 There is no explicit way for the kernel to test if a provided memory
777 location is present in the cache. The pseudo-locking debugging interface uses
778 the tracing infrastructure to provide two ways to measure cache residency of
779 the pseudo-locked region:
781 1) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
782 from these measurements are best visualized using a hist trigger (see
783 example below). In this test the pseudo-locked region is traversed at
784 a stride of 32 bytes while hardware prefetchers and preemption
are disabled. This also provides a substitute visualization of cache
hits and misses.
787 2) Cache hit and miss measurements using model specific precision counters if
788 available. Depending on the levels of cache on the system the pseudo_lock_l2
789 and pseudo_lock_l3 tracepoints are available.
791 When a pseudo-locked region is created a new debugfs directory is created for
792 it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
793 write-only file, pseudo_lock_measure, is present in this directory. The
measurement of the pseudo-locked region depends on the number written to this
file:
798 writing "1" to the pseudo_lock_measure file will trigger the latency
measurement captured in the pseudo_lock_mem_latency tracepoint. See
example below.
802 writing "2" to the pseudo_lock_measure file will trigger the L2 cache
803 residency (cache hits and misses) measurement captured in the
804 pseudo_lock_l2 tracepoint. See example below.
806 writing "3" to the pseudo_lock_measure file will trigger the L3 cache
807 residency (cache hits and misses) measurement captured in the
808 pseudo_lock_l3 tracepoint.
810 All measurements are recorded with the tracing infrastructure. This requires
811 the relevant tracepoints to be enabled before the measurement is triggered.
813 Example of latency debugging interface
814 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
815 In this example a pseudo-locked region named "newlock" was created. Here is
816 how we can measure the latency in cycles of reading from this region and
visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
is set::
820 # :> /sys/kernel/tracing/trace
821 # echo 'hist:keys=latency' > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
822 # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
823 # echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
824 # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
825 # cat /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/hist
829 # trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
832 { latency: 456 } hitcount: 1
833 { latency: 50 } hitcount: 83
834 { latency: 36 } hitcount: 96
835 { latency: 44 } hitcount: 174
836 { latency: 48 } hitcount: 195
837 { latency: 46 } hitcount: 262
838 { latency: 42 } hitcount: 693
839 { latency: 40 } hitcount: 3204
840 { latency: 38 } hitcount: 3484
847 Example of cache hits/misses debugging
848 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
849 In this example a pseudo-locked region named "newlock" was created on the L2
850 cache of a platform. Here is how we can obtain details of the cache hits
851 and misses using the platform's precision counters.
854 # :> /sys/kernel/tracing/trace
855 # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
856 # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
857 # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
858 # cat /sys/kernel/tracing/trace
863 # / _----=> need-resched
864 # | / _---=> hardirq/softirq
865 # || / _--=> preempt-depth
867 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
869 pseudo_lock_mea-1672 [002] .... 3132.860500: pseudo_lock_l2: hits=4097 miss=0
872 Examples for RDT allocation usage
873 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
877 On a two socket machine (one L3 cache per socket) with just four bits
for cache bit masks, minimum b/w of 10% with a memory bandwidth
granularity of 10%.

::

# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1
# echo "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata
886 # echo "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata
888 The default resource group is unmodified, so we have access to all parts
889 of all caches (its schemata file reads "L3:0=f;1=f").
891 Tasks that are under the control of group "p0" may only allocate from the
892 "lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
893 Tasks in group "p1" use the "lower" 50% of cache on both sockets.
895 Similarly, tasks that are under the control of group "p0" may use a
896 maximum memory b/w of 50% on socket0 and 50% on socket 1.
897 Tasks in group "p1" may also use 50% memory b/w on both sockets.
898 Note that unlike cache masks, memory b/w cannot specify whether these
allocations can overlap or not. The allocation specifies the maximum
b/w that the group may be able to use and the system admin can configure
the b/w accordingly.
If resctrl is using the software controller (mba_sc) then the user can enter
the max b/w in MiBps rather than the percentage values.
907 # echo "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
908 # echo "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata
910 In the above example the tasks in "p1" and "p0" on socket 0 would use a max b/w
of 1024MB whereas on socket 1 they would use 500MB.
915 Again two sockets, but this time with a more realistic 20-bit mask.
917 Two real time tasks pid=1234 running on processor 0 and pid=5678 running on
918 processor 1 on socket 0 on a 2-socket and dual core machine. To avoid noisy
919 neighbors, each of the two real-time tasks exclusively occupies one quarter
920 of L3 cache on socket 0.
923 # mount -t resctrl resctrl /sys/fs/resctrl
926 First we reset the schemata for the default group so that the "upper"
50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
ordinary tasks::
930 # echo "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata
932 Next we make a resource group for our first real time task and give
933 it access to the "top" 25% of the cache on socket 0.
937 # echo "L3:0=f8000;1=fffff" > p0/schemata
939 Finally we move our first real time task into this resource group. We
940 also use taskset(1) to ensure the task always runs on a dedicated CPU
941 on socket 0. Most uses of resource groups will also constrain which
processors tasks run on::

# echo 1234 > p0/tasks
# taskset -cp 1 1234
948 Ditto for the second real time task (with the remaining 25% of cache)::
951 # echo "L3:0=7c00;1=fffff" > p1/schemata
952 # echo 5678 > p1/tasks
955 For the same 2 socket system with memory b/w resource and CAT L3 the
schemata would look like (Assume min_bandwidth 10 and bandwidth_gran is
10):

For our first real time task this would request 20% memory b/w on socket 0::
962 # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata
For our second real time task this would request another 20% memory b/w
on socket 0::

# echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata
972 A single socket system which has real-time tasks running on core 4-7 and
973 non real-time workload assigned to core 0-3. The real-time tasks share text
974 and data, so a per task association is not required and due to interaction
with the kernel it's desired that the kernel on these cores shares L3 with
the tasks.
979 # mount -t resctrl resctrl /sys/fs/resctrl
982 First we reset the schemata for the default group so that the "upper"
983 50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
984 cannot be used by ordinary tasks::
986 # echo "L3:0=3ff\nMB:0=50" > schemata
988 Next we make a resource group for our real time cores and give it access
989 to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
994 # echo "L3:0=ffc00\nMB:0=50" > p0/schemata
996 Finally we move core 4-7 over to the new group and make sure that the
997 kernel and the tasks running there get 50% of the cache. They should
998 also get 50% of memory bandwidth assuming that the cores 4-7 are SMT
siblings and only the real time threads are scheduled on the cores 4-7::

# echo F0 > p0/cpus
1006 The resource groups in previous examples were all in the default "shareable"
1007 mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
from overlapping with that allocation.
1011 In this example a new exclusive resource group will be created on a L2 CAT
1012 system with two L2 cache instances that can be configured with an 8-bit
1013 capacity bitmask. The new exclusive resource group will be configured to use
1014 25% of each cache instance.
1017 # mount -t resctrl resctrl /sys/fs/resctrl/
1018 # cd /sys/fs/resctrl
First, we observe that the default group is configured to allocate to all L2
cache::

# cat schemata
L2:0=ff;1=ff

We could attempt to create the new resource group at this point, but it will
fail because of the overlap with the schemata of the default group::

# mkdir p0
# echo 'L2:0=0x3;1=0x3' > p0/schemata
# cat p0/mode
shareable
# echo exclusive > p0/mode
-sh: echo: write error: Invalid argument
# cat info/last_cmd_status
schemata overlaps
1038 To ensure that there is no overlap with another resource group the default
1039 resource group's schemata has to change, making it possible for the new
resource group to become exclusive.

::

# echo 'L2:0=0xfc;1=0xfc' > schemata
# echo exclusive > p0/mode
# grep . p0/*
p0/cpus:0
p0/mode:exclusive
p0/schemata:L2:0=03;1=03
p0/size:L2:0=262144;1=262144

A new resource group will on creation not overlap with an exclusive resource
group::

# mkdir p1
# grep . p1/*
p1/cpus:0
p1/mode:shareable
p1/schemata:L2:0=fc;1=fc
p1/size:L2:0=786432;1=786432
1061 The bit_usage will reflect how the cache is used::
1063 # cat info/L2/bit_usage
1064 0=SSSSSSEE;1=SSSSSSEE
1066 A resource group cannot be forced to overlap with an exclusive resource group::
1068 # echo 'L2:0=0x1;1=0x1' > p1/schemata
1069 -sh: echo: write error: Invalid argument
1070 # cat info/last_cmd_status
1071 overlaps with exclusive group
1073 Example of Cache Pseudo-Locking
1074 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lock a portion of L2 cache from cache id 1 using CBM 0x3. The pseudo-locked
region is exposed at /dev/pseudo_lock/newlock and can be provided to an
application as an argument to mmap().
1080 # mount -t resctrl resctrl /sys/fs/resctrl/
1081 # cd /sys/fs/resctrl
Ensure that there are bits available that can be pseudo-locked. Since only
unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
removed from the default resource group's schemata::
1087 # cat info/L2/bit_usage
1088 0=SSSSSSSS;1=SSSSSSSS
1089 # echo 'L2:1=0xfc' > schemata
1090 # cat info/L2/bit_usage
1091 0=SSSSSSSS;1=SSSSSS00
1093 Create a new resource group that will be associated with the pseudo-locked
1094 region, indicate that it will be used for a pseudo-locked region, and
1095 configure the requested pseudo-locked region capacity bitmask::
# mkdir newlock
# echo pseudo-locksetup > newlock/mode
1099 # echo 'L2:1=0x3' > newlock/schemata
1101 On success the resource group's mode will change to pseudo-locked, the
1102 bit_usage will reflect the pseudo-locked region, and the character device
1103 exposing the pseudo-locked region will exist::
# cat newlock/mode
pseudo-locked
# cat info/L2/bit_usage
1108 0=SSSSSSSS;1=SSSSSSPP
1109 # ls -l /dev/pseudo_lock/newlock
1110 crw------- 1 root root 243, 0 Apr 3 05:01 /dev/pseudo_lock/newlock
::

/*
 * Example code to access one page of pseudo-locked cache region
 * from user space.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/*
 * It is required that the application runs with affinity to only
 * cores associated with the pseudo-locked region. Here the cpu
 * is hardcoded for convenience of example.
 */
static int cpuid = 2;

int main(int argc, char *argv[])
{
        cpu_set_t cpuset;
        long page_size;
        void *mapping;
        int dev_fd;
        int ret;

        page_size = sysconf(_SC_PAGESIZE);

        CPU_ZERO(&cpuset);
        CPU_SET(cpuid, &cpuset);
        ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
        if (ret < 0) {
                perror("sched_setaffinity");
                exit(EXIT_FAILURE);
        }

        dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
        if (dev_fd < 0) {
                perror("open");
                exit(EXIT_FAILURE);
        }

        mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
                       dev_fd, 0);
        if (mapping == MAP_FAILED) {
                perror("mmap");
                close(dev_fd);
                exit(EXIT_FAILURE);
        }

        /* Application interacts with pseudo-locked memory @mapping */

        ret = munmap(mapping, page_size);
        if (ret < 0) {
                perror("munmap");
                close(dev_fd);
                exit(EXIT_FAILURE);
        }

        close(dev_fd);
        return 0;
}
1178 Locking between applications
1179 ----------------------------
1181 Certain operations on the resctrl filesystem, composed of read/writes
1182 to/from multiple files, must be atomic.
As an example, the allocation of an exclusive reservation of L3 cache
involves:
1187 1. Read the cbmmasks from each directory or the per-resource "bit_usage"
2. Find a contiguous set of bits in the global CBM bitmask that is not set
in any of the directory cbmmasks
1190 3. Create a new directory
1191 4. Set the bits found in step 2 to the new directory "schemata" file
1193 If two applications attempt to allocate space concurrently then they can
end up allocating the same bits so the reservations are shared instead of
exclusive.
1197 To coordinate atomic operations on the resctrlfs and to avoid the problem
1198 above, the following locking procedure is recommended:
Locking is based on flock, which is available in libc and also as a shell
command.

Write lock:

A) Take flock(LOCK_EX) on /sys/fs/resctrl
B) Read/write the directory structure.
C) funlock

Read lock:

A) Take flock(LOCK_SH) on /sys/fs/resctrl
B) If success read the directory structure.
C) funlock
Example with bash::

# Atomically read directory structure
1218 $ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl
# Read directory contents and create new subdirectory

$ cat create-dir.sh
1223 find /sys/fs/resctrl/ > output.txt
1224 mask = function-of(output.txt)
1225 mkdir /sys/fs/resctrl/newres/
1226 echo mask > /sys/fs/resctrl/newres/schemata
1228 $ flock /sys/fs/resctrl/ ./create-dir.sh
Example with C::

/*
 * Example code to take advisory locks
 * before accessing resctrl filesystem
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/file.h>

void resctrl_take_shared_lock(int fd)
{
        int ret;

        /* take shared lock on resctrl filesystem */
        ret = flock(fd, LOCK_SH);
        if (ret)
                perror("flock");
}

void resctrl_take_exclusive_lock(int fd)
{
        int ret;

        /* take exclusive lock on resctrl filesystem */
        ret = flock(fd, LOCK_EX);
        if (ret)
                perror("flock");
}

void resctrl_release_lock(int fd)
{
        int ret;

        /* release lock on resctrl filesystem */
        ret = flock(fd, LOCK_UN);
        if (ret)
                perror("flock");
}

int main(void)
{
        int fd;

        fd = open("/sys/fs/resctrl", O_DIRECTORY);
        if (fd == -1) {
                perror("open");
                exit(EXIT_FAILURE);
        }

        resctrl_take_shared_lock(fd);
        /* code to read directory contents */
        resctrl_release_lock(fd);

        resctrl_take_exclusive_lock(fd);
        /* code to read and write directory contents */
        resctrl_release_lock(fd);

        close(fd);
        return 0;
}
1293 Examples for RDT Monitoring along with allocation usage
1294 =======================================================
1295 Reading monitored data
1296 ----------------------
Reading an event file (for example: mon_data/mon_L3_00/llc_occupancy) would
1298 show the current snapshot of LLC occupancy of the corresponding MON
1299 group or CTRL_MON group.
1302 Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
1303 ------------------------------------------------------------------------
1304 On a two socket machine (one L3 cache per socket) with just four bits
1305 for cache bit masks::
1307 # mount -t resctrl resctrl /sys/fs/resctrl
1308 # cd /sys/fs/resctrl
1310 # echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
1311 # echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
1312 # echo 5678 > p1/tasks
1313 # echo 5679 > p1/tasks
1315 The default resource group is unmodified, so we have access to all parts
1316 of all caches (its schemata file reads "L3:0=f;1=f").
1318 Tasks that are under the control of group "p0" may only allocate from the
1319 "lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
1320 Tasks in group "p1" use the "lower" 50% of cache on both sockets.
Create monitor groups and assign a subset of tasks to each monitor group::

# cd /sys/fs/resctrl/p1/mon_groups
# mkdir m11 m12
# echo 5678 > m11/tasks
1328 # echo 5679 > m12/tasks
Fetch the data (values are reported in bytes)::
1333 # cat m11/mon_data/mon_L3_00/llc_occupancy
1335 # cat m11/mon_data/mon_L3_01/llc_occupancy
1337 # cat m12/mon_data/mon_L3_00/llc_occupancy
1340 The parent ctrl_mon group shows the aggregated data.
1343 # cat /sys/fs/resctrl/p1/mon_data/mon_l3_00/llc_occupancy
1346 Example 2 (Monitor a task from its creation)
1347 --------------------------------------------
1348 On a two socket machine (one L3 cache per socket)::
1350 # mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p1
An RMID is allocated to the group once it is created and hence the <cmd>
below is monitored from its creation::

# echo $$ > /sys/fs/resctrl/p1/tasks
# <cmd>
Fetch the data::

# cat /sys/fs/resctrl/p1/mon_data/mon_l3_00/llc_occupancy
1366 Example 3 (Monitor without CAT support or before creating CAT groups)
1367 ---------------------------------------------------------------------
1369 Assume a system like HSW has only CQM and no CAT support. In this case
1370 the resctrl will still mount but cannot create CTRL_MON directories.
But the user can create different MON groups within the root group and
thereby monitor all tasks including kernel threads.
This can also be used to profile a job's cache size footprint before deciding
how to allocate it to different allocation groups.
1378 # mount -t resctrl resctrl /sys/fs/resctrl
1379 # cd /sys/fs/resctrl
1380 # mkdir mon_groups/m01
1381 # mkdir mon_groups/m02
1383 # echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
1384 # echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks
1386 Monitor the groups separately and also get per domain data. From the
below it is apparent that the tasks are mostly doing work on
domain (socket) 0.
1391 # cat /sys/fs/resctrl/mon_groups/m01/mon_L3_00/llc_occupancy
1393 # cat /sys/fs/resctrl/mon_groups/m01/mon_L3_01/llc_occupancy
1395 # cat /sys/fs/resctrl/mon_groups/m02/mon_L3_00/llc_occupancy
1397 # cat /sys/fs/resctrl/mon_groups/m02/mon_L3_01/llc_occupancy
1401 Example 4 (Monitor real time tasks)
1402 -----------------------------------
1404 A single socket system which has real time tasks running on cores 4-7
1405 and non real time tasks on other cpus. We want to monitor the cache
1406 occupancy of the real time threads on these cores.
1409 # mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p1

Move the cpus 4-7 over to p1::

# echo f0 > p1/cpus
1417 View the llc occupancy snapshot::
1419 # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
1425 Intel MBM Counters May Report System Memory Bandwidth Incorrectly
1426 -----------------------------------------------------------------
1428 Errata SKX99 for Skylake server and BDF102 for Broadwell server.
1430 Problem: Intel Memory Bandwidth Monitoring (MBM) counters track metrics
1431 according to the assigned Resource Monitor ID (RMID) for that logical
1432 core. The IA32_QM_CTR register (MSR 0xC8E), used to report these
1433 metrics, may report incorrect system bandwidth for certain RMID values.
Implication: Due to the errata, system memory bandwidth may not match
what is reported.
1438 Workaround: MBM total and local readings are corrected according to the
1439 following correction factor table:
1441 +---------------+---------------+---------------+-----------------+
1442 |core count |rmid count |rmid threshold |correction factor|
1443 +---------------+---------------+---------------+-----------------+
1444 |1 |8 |0 |1.000000 |
1445 +---------------+---------------+---------------+-----------------+
1446 |2 |16 |0 |1.000000 |
1447 +---------------+---------------+---------------+-----------------+
1448 |3 |24 |15 |0.969650 |
1449 +---------------+---------------+---------------+-----------------+
1450 |4 |32 |0 |1.000000 |
1451 +---------------+---------------+---------------+-----------------+
1452 |6 |48 |31 |0.969650 |
1453 +---------------+---------------+---------------+-----------------+
1454 |7 |56 |47 |1.142857 |
1455 +---------------+---------------+---------------+-----------------+
1456 |8 |64 |0 |1.000000 |
1457 +---------------+---------------+---------------+-----------------+
1458 |9 |72 |63 |1.185115 |
1459 +---------------+---------------+---------------+-----------------+
1460 |10 |80 |63 |1.066553 |
1461 +---------------+---------------+---------------+-----------------+
1462 |11 |88 |79 |1.454545 |
1463 +---------------+---------------+---------------+-----------------+
1464 |12 |96 |0 |1.000000 |
1465 +---------------+---------------+---------------+-----------------+
1466 |13 |104 |95 |1.230769 |
1467 +---------------+---------------+---------------+-----------------+
1468 |14 |112 |95 |1.142857 |
1469 +---------------+---------------+---------------+-----------------+
1470 |15 |120 |95 |1.066667 |
1471 +---------------+---------------+---------------+-----------------+
1472 |16 |128 |0 |1.000000 |
1473 +---------------+---------------+---------------+-----------------+
1474 |17 |136 |127 |1.254863 |
1475 +---------------+---------------+---------------+-----------------+
1476 |18 |144 |127 |1.185255 |
1477 +---------------+---------------+---------------+-----------------+
1478 |19 |152 |0 |1.000000 |
1479 +---------------+---------------+---------------+-----------------+
1480 |20 |160 |127 |1.066667 |
1481 +---------------+---------------+---------------+-----------------+
1482 |21 |168 |0 |1.000000 |
1483 +---------------+---------------+---------------+-----------------+
1484 |22 |176 |159 |1.454334 |
1485 +---------------+---------------+---------------+-----------------+
1486 |23 |184 |0 |1.000000 |
1487 +---------------+---------------+---------------+-----------------+
1488 |24 |192 |127 |0.969744 |
1489 +---------------+---------------+---------------+-----------------+
1490 |25 |200 |191 |1.280246 |
1491 +---------------+---------------+---------------+-----------------+
1492 |26 |208 |191 |1.230921 |
1493 +---------------+---------------+---------------+-----------------+
1494 |27 |216 |0 |1.000000 |
1495 +---------------+---------------+---------------+-----------------+
1496 |28 |224 |191 |1.143118 |
1497 +---------------+---------------+---------------+-----------------+
1499 If rmid > rmid threshold, MBM total and local values should be multiplied
1500 by the correction factor.
1504 1. Erratum SKX99 in Intel Xeon Processor Scalable Family Specification Update:
1505 http://web.archive.org/web/20200716124958/https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-spec-update.html
1507 2. Erratum BDF102 in Intel Xeon E5-2600 v4 Processor Product Family Specification Update:
1508 http://web.archive.org/web/20191125200531/https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v4-spec-update.pdf
1510 3. The errata in Intel Resource Director Technology (Intel RDT) on 2nd Generation Intel Xeon Scalable Processors Reference Manual:
1511 https://software.intel.com/content/www/us/en/develop/articles/intel-resource-director-technology-rdt-reference-manual.html
1513 for further information.