1 .. include:: <isonum.txt>
3 ============================================
4 Reliability, Availability and Serviceability
5 ============================================
Reliability, Availability and Serviceability (RAS) is a concept used on
servers to measure their robustness.
14 is the probability that a system will produce correct outputs.
16 * Generally measured as Mean Time Between Failures (MTBF)
17 * Enhanced by features that help to avoid, detect and repair hardware faults
20 is the probability that a system is operational at a given time
* Generally measured as a percentage of downtime over a given period of time
23 * Often uses mechanisms to detect and correct hardware faults in
26 Serviceability (or maintainability)
27 is the simplicity and speed with which a system can be repaired or
* Generally measured as Mean Time Between Repair (MTBR)
In order to reduce system downtime, a system should be capable of detecting
hardware errors and, when possible, correcting them at runtime. It should
also provide mechanisms to detect hardware degradation, in order to warn
the system administrator to replace a component before it causes data loss
or system downtime.
41 Among the monitoring measures, the most usual ones include:
43 * CPU – detect errors at instruction execution and at L1/L2/L3 caches;
44 * Memory – add error correction logic (ECC) to detect and correct errors;
45 * I/O – add CRC checksums for transferred data;
46 * Storage – RAID, journal file systems, checksums,
47 Self-Monitoring, Analysis and Reporting Technology (SMART).
By monitoring the number of occurrences of error detections, it is possible
to identify if the probability of hardware errors is increasing and, in such
a case, to do preventive maintenance, replacing a degraded component while
those errors are correctable.
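
For example, on Linux, a quick way to check whether the kernel has been
reporting such events is to search the kernel log for messages from the
error reporting subsystems (EDAC and MCE, both described later in this
document). This is only a sketch; the exact message format depends on the
driver::

    # Look for memory/hardware error reports in the kernel log
    dmesg | grep -iE 'edac|machine check|hardware error'
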
57 Most mechanisms used on modern systems use technologies like Hamming
58 Codes that allow error correction when the number of errors on a bit packet
is below a threshold. If the number of errors is above that threshold, those
mechanisms can indicate with a high degree of confidence that an error
happened, but
Also, sometimes an error occurs on a component that is not used. For
example, a part of the memory that is not currently allocated.
66 That defines some categories of errors:
68 * **Correctable Error (CE)** - the error detection mechanism detected and
69 corrected the error. Such errors are usually not fatal, although some
70 Kernel mechanisms allow the system administrator to consider them as fatal.
* **Uncorrected Error (UE)** - the number of errors was above the error
  correction threshold, and the system was unable to auto-correct them.
* **Fatal Error** - when a UE happens on a critical component of the
  system (for example, a piece of the Kernel got corrupted by a UE), the
  only reliable way to avoid data corruption is to hang or reboot the machine.
* **Non-fatal Error** - when a UE happens on an unused component,
  like a CPU in power down state or an unused memory bank, the system may
  still run, eventually replacing the affected hardware by a hot spare,
When an error happens on a userspace process, it is also possible to
kill the process and let userspace restart it.
87 The mechanism for handling non-fatal errors is usually complex and may
88 require the help of some userspace application, in order to apply the
89 policy desired by the system administrator.
91 Identifying a bad hardware component
92 ------------------------------------
Just detecting a hardware flaw is usually not enough, as the system needs
to pinpoint the minimal replaceable unit (MRU) that should be exchanged
to make the hardware reliable again.
98 So, it requires not only error logging facilities, but also mechanisms that
99 will translate the error message to the silkscreen or component label for
Typically, it is very complex for memory, as modern CPUs interleave memory
from different memory modules, in order to provide better performance. The
DMI BIOS usually has a list of memory module labels, which can be obtained
using the ``dmidecode`` tool. For example, on a desktop machine, it shows::
113 Locator: ChannelA-DIMM0
116 Type Detail: Synchronous
119 Configured Clock Speed: 2133 MHz
In the above example, a DDR4 SO-DIMM memory module is located at the
system's memory labeled as "BANK 0", as given by the *bank locator* field.
Please notice that, on such a system, the *total width* is equal to the
*data width*. It means that this memory module doesn't have error
detection/correction mechanisms.
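
Since ``dmidecode`` prints all DMI records by default, it may be convenient
to restrict its output to the "Memory Device" records (DMI type 17). A
possible invocation, which requires root privileges, is::

    # Show only the "Memory Device" (DMI type 17) records
    dmidecode --type 17
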
Unfortunately, not all systems use the same field to specify the memory
bank. In the example below, from an older server, ``dmidecode`` shows::
132 Error Information Handle: Not Provided
139 Bank Locator: Not Specified
141 Type Detail: Synchronous Registered (Buffered)
144 Configured Clock Speed: 1600 MHz
There, the DDR3 RDIMM memory module is located at the system's memory labeled
as "DIMM_A1", as given by the *locator* field. Please notice that this
memory module has 64 bits of *data width* and 72 bits of *total width*. So,
it has 8 extra bits to be used by error detection and correction mechanisms.
This kind of memory is called Error-correcting code memory (ECC memory).
To make things even worse, it is not uncommon for systems with different
labels on their boards to use exactly the same BIOS, meaning that
the labels provided by the BIOS won't match the real ones.
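
Because of that, the mapping between EDAC locations and the real silkscreen
labels is often maintained in userspace. For example, the ``ras-mc-ctl``
helper shipped with ``rasdaemon`` keeps a per-motherboard label database;
assuming it is installed and has an entry for the motherboard in use, it can
be used like this::

    # Show the known labels for this motherboard and the current sysfs labels
    ras-mc-ctl --print-labels
    # Write the database labels to the EDAC dimm_label/ch*_dimm_label files
    ras-mc-ctl --register-labels
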
As mentioned in the previous section, ECC memory has extra bits to be
used for error correction. So, on 64 bit systems, a memory module
has 64 bits of *data width* and 72 bits of *total width*. So, there are
8 extra bits to be used for the error detection and correction
mechanisms. Those extra bits are called *syndrome*\ [#f1]_\ [#f2]_.
So, when the CPU requests the memory controller to write a word with
*data width*, the memory controller calculates the *syndrome* in real time,
using Hamming code, or some other error correction code, like SECDED+,
producing a code with *total width* size. Such code is then written
to the memory modules.
171 At read, the *total width* bits code is converted back, using the same
172 ECC code used on write, producing a word with *data width* and a *syndrome*.
173 The word with *data width* is sent to the CPU, even when errors happen.
175 The memory controller also looks at the *syndrome* in order to check if
176 there was an error, and if the ECC code was able to fix such error.
177 If the error was corrected, a Corrected Error (CE) happened. If not, an
178 Uncorrected Error (UE) happened.
180 The information about the CE/UE errors is stored on some special registers
181 at the memory controller and can be accessed by reading such registers,
182 either by BIOS, by some special CPUs or by Linux EDAC driver. On x86 64
183 bit CPUs, such errors can also be retrieved via the Machine Check
184 Architecture (MCA)\ [#f3]_.
.. [#f1] Please notice that several memory controllers allow operation in a
   mode called "Lock-Step", where they group two memory modules together,
   doing 128-bit reads/writes. That gives 16 bits for error correction, which
   significantly improves the error correction mechanism, at the expense
   that, when an error happens, there's no way to know which memory module is
   to blame. So, it has to blame both memory modules.
.. [#f2] Some memory controllers also allow using memory in mirror mode.
   In such mode, the same data is written to two memory modules. At read,
   the system checks both memory modules, in order to check if both provide
   identical data. In such a configuration, when an error happens, there's no
   way to know which memory module is to blame. So, it has to blame both
   memory modules (or 4 memory modules, if the system is also on Lock-step
201 .. [#f3] For more details about the Machine Check Architecture (MCA),
202 please read Documentation/x86/x86_64/machinecheck.rst at the Kernel tree.
204 EDAC - Error Detection And Correction
205 *************************************
209 "bluesmoke" was the name for this device driver subsystem when it
210 was "out-of-tree" and maintained at http://bluesmoke.sourceforge.net.
211 That site is mostly archaic now and can be used only for historical
When the subsystem was pushed upstream for the first time, on
Kernel 2.6.16, it was renamed to ``EDAC``.
The ``edac`` kernel module's goal is to detect and report hardware errors
that occur within the computer system running under Linux.
226 Memory Correctable Errors (CE) and Uncorrectable Errors (UE) are the
227 primary errors being harvested. These types of errors are harvested by
228 the ``edac_mc`` device.
Detecting CE events, then harvesting those events and reporting them,
**can** be, but is not necessarily, a predictor of future UE events. With
CE events only, the system can and will continue to operate as no data
has been damaged yet.
235 However, preventive maintenance and proactive part replacement of memory
236 modules exhibiting CEs can reduce the likelihood of the dreaded UE events
239 Other hardware elements
240 -----------------------
242 A new feature for EDAC, the ``edac_device`` class of device, was added in
243 the 2.6.23 version of the kernel.
245 This new device type allows for non-memory type of ECC hardware detectors
246 to have their states harvested and presented to userspace via the sysfs
249 Some architectures have ECC detectors for L1, L2 and L3 caches,
250 along with DMA engines, fabric switches, main data path switches,
251 interconnections, and various other hardware data paths. If the hardware
reports it, then an edac_device device probably can be constructed to
253 harvest and present that to userspace.
259 In addition, PCI devices are scanned for PCI Bus Parity and SERR Errors
260 in order to determine if errors are occurring during data transfers.
The presence of PCI Parity errors must be taken with a grain of salt.
263 There are several add-in adapters that do **not** follow the PCI specification
264 with regards to Parity generation and reporting. The specification says
265 the vendor should tie the parity status bits to 0 if they do not intend
266 to generate parity. Some vendors do not do this, and thus the parity bit
267 can "float" giving false positives.
269 There is a PCI device attribute located in sysfs that is checked by
270 the EDAC PCI scanning code. If that attribute is set, PCI parity/error
scanning is skipped for that device. The attribute is::

    broken_parity_status

and is located in the ``/sys/devices/pci<XXX>/0000:XX:YY.Z`` directory of
each PCI device.
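
For example, if the adapter at a (hypothetical) address 0000:03:00.0 is known
to report bogus parity errors, it can be excluded from the scan with::

    # Mark this device as having broken parity reporting, so that the
    # EDAC PCI scanning code skips it
    echo 1 > /sys/bus/pci/devices/0000:03:00.0/broken_parity_status
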
282 EDAC is composed of a "core" module (``edac_core.ko``) and several Memory
283 Controller (MC) driver modules. On a given system, the CORE is loaded
284 and one MC driver will be loaded. Both the CORE and the MC driver (or
``edac_device`` driver) have individual versions that reflect the current
release level of their respective modules.
288 Thus, to "report" on what version a system is running, one must report
289 both the CORE's and the MC driver's versions.
295 If ``edac`` was statically linked with the kernel then no loading
296 is necessary. If ``edac`` was built as modules then simply modprobe
297 the ``edac`` pieces that you need. You should be able to modprobe
298 hardware-specific modules and have the dependencies load the necessary
303 $ modprobe amd76x_edac
305 loads both the ``amd76x_edac.ko`` memory controller module and the
306 ``edac_mc.ko`` core module.
EDAC presents a ``sysfs`` interface for control and reporting purposes. It
lives in the ``/sys/devices/system/edac`` directory.
315 Within this directory there currently reside 2 components:
317 ======= ==============================
318 mc memory controller(s) system
319 pci PCI control and status system
320 ======= ==============================
324 Memory Controller (mc) Model
325 ----------------------------
327 Each ``mc`` device controls a set of memory modules [#f4]_. These modules
328 are laid out in a Chip-Select Row (``csrowX``) and Channel table (``chX``).
329 There can be multiple csrows and multiple channels.
.. [#f4] Nowadays, the term DIMM (Dual In-line Memory Module) is widely
   used to refer to a memory module, although there are other memory
   packaging alternatives, like SO-DIMM, SIMM, etc. The UEFI
   specification (Version 2.7) defines a memory module in the Common
   Platform Error Record (CPER) section to be an SMBIOS Memory Device
   (Type 17). Throughout this document, and inside the EDAC subsystem, the
   term "dimm" is used for all memory modules, even when they use a
   different kind of packaging.
340 Memory controllers allow for several csrows, with 8 csrows being a
341 typical value. Yet, the actual number of csrows depends on the layout of
342 a given motherboard, memory controller and memory module characteristics.
Dual channels allow for dual data length (e.g. 128 bits, on 64 bit systems)
data transfers to/from the CPU from/to memory. Some newer chipsets allow
for more than 2 channels, like Fully Buffered DIMM (FB-DIMM) memory
controllers. The following example will assume 2 channels:
+------------+-----------------------+
| CS Rows    | Channels              |
+------------+-----------+-----------+
|            | ``ch0``   | ``ch1``   |
+============+===========+===========+
| ``csrow0`` | DIMM_A0   | DIMM_B0   |
|            | rank0     | rank0     |
+------------+           |           |
| ``csrow1`` | rank1     | rank1     |
+------------+-----------+-----------+
| ``csrow2`` | DIMM_A1   | DIMM_B1   |
|            | rank0     | rank0     |
+------------+           |           |
| ``csrow3`` | rank1     | rank1     |
+------------+-----------+-----------+
365 In the above example, there are 4 physical slots on the motherboard
368 +---------+---------+
369 | DIMM_A0 | DIMM_B0 |
370 +---------+---------+
371 | DIMM_A1 | DIMM_B1 |
372 +---------+---------+
374 Labels for these slots are usually silk-screened on the motherboard.
375 Slots labeled ``A`` are channel 0 in this example. Slots labeled ``B`` are
channel 1. Notice that there are two csrows possible on a physical DIMM.
These csrows are assigned based on the slot into which the memory DIMM is
placed. Thus, when 1 DIMM is placed in each channel, the csrows cross both
DIMMs.
381 Memory DIMMs come single or dual "ranked". A rank is a populated csrow.
382 In the example above 2 dual ranked DIMMs are similarly placed. Thus,
383 both csrow0 and csrow1 are populated. On the other hand, when 2 single
384 ranked DIMMs are placed in slots DIMM_A0 and DIMM_B0, then they will
385 have just one csrow (csrow0) and csrow1 will be empty. The pattern
386 repeats itself for csrow2 and csrow3. Also note that some memory
387 controllers don't have any logic to identify the memory module, see
388 ``rankX`` directories below.
390 The representation of the above is reflected in the directory
391 tree in EDAC's sysfs interface. Starting in directory
392 ``/sys/devices/system/edac/mc``, each memory controller will be
393 represented by its own ``mcX`` directory, where ``X`` is the
Under each ``mcX`` directory, each csrow is in turn represented by a
``csrowX`` directory, where ``X`` is the csrow index::
413 Notice that there is no csrow1, which indicates that csrow0 is composed
of single ranked DIMMs. This should also apply to both channels, in
415 order to have dual-channel mode be operational. Since both csrow2 and
416 csrow3 are populated, this indicates a dual ranked set of DIMMs for
419 Within each of the ``mcX`` and ``csrowX`` directories are several EDAC
420 control and attribute files.
425 In ``mcX`` directories are EDAC control and attribute files for
426 this ``X`` instance of the memory controllers.
428 For a description of the sysfs API, please see:
430 Documentation/ABI/testing/sysfs-devices-edac
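
As a quick example, the per-controller error totals can be read directly
from those attribute files (this assumes a single ``mc0`` instance; the
``*_noinfo_count`` files count errors that could not be attributed to a
specific location)::

    cd /sys/devices/system/edac/mc/mc0
    grep . mc_name ce_count ce_noinfo_count ue_count ue_noinfo_count
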
433 ``dimmX`` or ``rankX`` directories
434 ----------------------------------
436 The recommended way to use the EDAC subsystem is to look at the information
437 provided by the ``dimmX`` or ``rankX`` directories [#f5]_.
439 A typical EDAC system has the following structure under
440 ``/sys/devices/system/edac/``\ [#f6]_::
442 /sys/devices/system/edac/
446 │ │ ├── ce_noinfo_count
448 │ │ │ ├── dimm_ce_count
449 │ │ │ ├── dimm_dev_type
450 │ │ │ ├── dimm_edac_mode
452 │ │ │ ├── dimm_location
453 │ │ │ ├── dimm_mem_type
454 │ │ │ ├── dimm_ue_count
459 │ │ ├── reset_counters
460 │ │ ├── seconds_since_reset
463 │ │ ├── ue_noinfo_count
467 │ │ ├── ce_noinfo_count
469 │ │ │ ├── dimm_ce_count
470 │ │ │ ├── dimm_dev_type
471 │ │ │ ├── dimm_edac_mode
473 │ │ │ ├── dimm_location
474 │ │ │ ├── dimm_mem_type
475 │ │ │ ├── dimm_ue_count
480 │ │ ├── reset_counters
481 │ │ ├── seconds_since_reset
484 │ │ ├── ue_noinfo_count
489 In the ``dimmX`` directories are EDAC control and attribute files for
490 this ``X`` memory module:
- ``size`` - Total memory managed by this memory module attribute file

  This attribute file displays, in count of megabytes, the memory
  that this memory module contains.
497 - ``dimm_ue_count`` - Uncorrectable Errors count attribute file
499 This attribute file displays the total count of uncorrectable
500 errors that have occurred on this DIMM. If panic_on_ue is set
501 this counter will not have a chance to increment, since EDAC
502 will panic the system.
- ``dimm_ce_count`` - Correctable Errors count attribute file

  This attribute file displays the total count of correctable
  errors that have occurred on this DIMM. This count is very
  important to examine. CEs provide early indications that a
  DIMM is beginning to fail. This count field should be
  monitored for non-zero values, and such information should be
  reported to the system administrator (a monitoring sketch is
  given after this list).
513 - ``dimm_dev_type`` - Device type attribute file
515 This attribute file will display what type of DRAM device is
516 being utilized on this DIMM.
524 - ``dimm_edac_mode`` - EDAC Mode of operation attribute file
526 This attribute file will display what type of Error detection
527 and correction is being utilized.
529 - ``dimm_label`` - memory module label control file
531 This control file allows this DIMM to have a label assigned
532 to it. With this label in the module, when errors occur
533 the output can provide the DIMM label in the system log.
534 This becomes vital for panic events to isolate the
535 cause of the UE event.
537 DIMM Labels must be assigned after booting, with information
538 that correctly identifies the physical slot with its
539 silk screen label. This information is currently very
540 motherboard specific and determination of this information
541 must occur in userland at this time.
543 - ``dimm_location`` - location of the memory module
545 The location can have up to 3 levels, and describe how the
546 memory controller identifies the location of a memory module.
547 Depending on the type of memory and memory controller, it
550 - *csrow* and *channel* - used when the memory controller
551 doesn't identify a single DIMM - e. g. in ``rankX`` dir;
552 - *branch*, *channel*, *slot* - typically used on FB-DIMM memory
554 - *channel*, *slot* - used on Nehalem and newer Intel drivers.
556 - ``dimm_mem_type`` - Memory Type attribute file
558 This attribute file will display what type of memory is currently
559 on this csrow. Normally, either buffered or unbuffered memory.
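
The sketch below shows one way to monitor those counters from userspace,
using only the ``dimmX`` attribute files described above (the paths are
assumed to follow the layout shown earlier; adjust them as needed)::

    # Report every memory module that has a non-zero corrected error count
    for f in /sys/devices/system/edac/mc/mc*/dimm*/dimm_ce_count; do
        count=$(cat "$f")
        if [ "$count" -ne 0 ]; then
            label=$(cat "$(dirname "$f")/dimm_label")
            echo "$f: $count corrected error(s) on DIMM '$label'"
        fi
    done
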
.. [#f5] On some systems, the memory controller doesn't have any logic
   to identify the memory module. On such systems, the directory is called
   ``rankX`` and works in a similar way to the ``csrowX`` directories.
   On modern Intel memory controllers, the memory controller identifies the
   memory modules directly. On such systems, the directory is called
   ``dimmX``.
570 .. [#f6] There are also some ``power`` directories and ``subsystem``
571 symlinks inside the sysfs mapping that are automatically created by
572 the sysfs subsystem. Currently, they serve no purpose.
574 ``csrowX`` directories
575 ----------------------
577 When CONFIG_EDAC_LEGACY_SYSFS is enabled, sysfs will contain the ``csrowX``
578 directories. As this API doesn't work properly for Rambus, FB-DIMMs and
579 modern Intel Memory Controllers, this is being deprecated in favor of
580 ``dimmX`` directories.
582 In the ``csrowX`` directories are EDAC control and attribute files for
583 this ``X`` instance of csrow:
586 - ``ue_count`` - Total Uncorrectable Errors count attribute file
588 This attribute file displays the total count of uncorrectable
589 errors that have occurred on this csrow. If panic_on_ue is set
590 this counter will not have a chance to increment, since EDAC
591 will panic the system.
- ``ce_count`` - Total Correctable Errors count attribute file

  This attribute file displays the total count of correctable
  errors that have occurred on this csrow. This count is very
  important to examine. CEs provide early indications that a
  DIMM is beginning to fail. This count field should be
  monitored for non-zero values, and such information should be
  reported to the system administrator.
604 - ``size_mb`` - Total memory managed by this csrow attribute file
606 This attribute file displays, in count of megabytes, the memory
607 that this csrow contains.
610 - ``mem_type`` - Memory Type attribute file
612 This attribute file will display what type of memory is currently
613 on this csrow. Normally, either buffered or unbuffered memory.
620 - ``edac_mode`` - EDAC Mode of operation attribute file
622 This attribute file will display what type of Error detection
623 and correction is being utilized.
626 - ``dev_type`` - Device type attribute file
628 This attribute file will display what type of DRAM device is
629 being utilized on this DIMM.
638 - ``ch0_ce_count`` - Channel 0 CE Count attribute file
640 This attribute file will display the count of CEs on this
641 DIMM located in channel 0.
644 - ``ch0_ue_count`` - Channel 0 UE Count attribute file
646 This attribute file will display the count of UEs on this
647 DIMM located in channel 0.
650 - ``ch0_dimm_label`` - Channel 0 DIMM Label control file
653 This control file allows this DIMM to have a label assigned
654 to it. With this label in the module, when errors occur
655 the output can provide the DIMM label in the system log.
656 This becomes vital for panic events to isolate the
657 cause of the UE event.
659 DIMM Labels must be assigned after booting, with information
660 that correctly identifies the physical slot with its
661 silk screen label. This information is currently very
662 motherboard specific and determination of this information
663 must occur in userland at this time.
666 - ``ch1_ce_count`` - Channel 1 CE Count attribute file
669 This attribute file will display the count of CEs on this
670 DIMM located in channel 1.
- ``ch1_ue_count`` - Channel 1 UE Count attribute file

  This attribute file will display the count of UEs on this
  DIMM located in channel 1.
680 - ``ch1_dimm_label`` - Channel 1 DIMM Label control file
682 This control file allows this DIMM to have a label assigned
683 to it. With this label in the module, when errors occur
684 the output can provide the DIMM label in the system log.
685 This becomes vital for panic events to isolate the
686 cause of the UE event.
688 DIMM Labels must be assigned after booting, with information
689 that correctly identifies the physical slot with its
690 silk screen label. This information is currently very
691 motherboard specific and determination of this information
692 must occur in userland at this time.
698 If logging for UEs and CEs is enabled, then system logs will contain
699 information indicating that errors have been detected::
701 EDAC MC0: CE page 0x283, offset 0xce0, grain 8, syndrome 0x6ec3, row 0, channel 1 "DIMM_B1": amd76x_edac
702 EDAC MC0: CE page 0x1e5, offset 0xfb0, grain 8, syndrome 0xb741, row 0, channel 1 "DIMM_B1": amd76x_edac
705 The structure of the message is:
707 +---------------------------------------+-------------+
708 | Content | Example |
709 +=======================================+=============+
710 | The memory controller | MC0 |
711 +---------------------------------------+-------------+
| The error type                        | CE          |
+---------------------------------------+-------------+
714 | Memory page | 0x283 |
715 +---------------------------------------+-------------+
716 | Offset in the page | 0xce0 |
717 +---------------------------------------+-------------+
718 | The byte granularity | grain 8 |
719 | or resolution of the error | |
720 +---------------------------------------+-------------+
| The error syndrome                    | 0x6ec3      |
722 +---------------------------------------+-------------+
723 | Memory row | row 0 |
724 +---------------------------------------+-------------+
725 | Memory channel | channel 1 |
726 +---------------------------------------+-------------+
| DIMM label, if set prior to the error | DIMM_B1     |
728 +---------------------------------------+-------------+
729 | And then an optional, driver-specific | |
730 | message that may have additional | |
732 +---------------------------------------+-------------+
734 Both UEs and CEs with no info will lack all but memory controller, error
735 type, a notice of "no info" and then an optional, driver-specific error
739 PCI Bus Parity Detection
740 ------------------------
742 On Header Type 00 devices, the primary status is looked at for any
743 parity error regardless of whether parity is enabled on the device or
744 not. (The spec indicates parity is generated in some cases). On Header
745 Type 01 bridges, the secondary status register is also looked at to see
746 if parity occurred on the bus on the other side of the bridge.
752 Under ``/sys/devices/system/edac/pci`` are control and attribute files as
756 - ``check_pci_parity`` - Enable/Disable PCI Parity checking control file
758 This control file enables or disables the PCI Bus Parity scanning
759 operation. Writing a 1 to this file enables the scanning. Writing
760 a 0 to this file disables the scanning.
764 echo "1" >/sys/devices/system/edac/pci/check_pci_parity
768 echo "0" >/sys/devices/system/edac/pci/check_pci_parity
771 - ``pci_parity_count`` - Parity Count
773 This attribute file will display the number of parity errors that
780 - ``edac_mc_panic_on_ue`` - Panic on UE control file
782 An uncorrectable error will cause a machine panic. This is usually
783 desirable. It is a bad idea to continue when an uncorrectable error
784 occurs - it is indeterminate what was uncorrected and the operating
785 system context might be so mangled that continuing will lead to further
786 corruption. If the kernel has MCE configured, then EDAC will never
791 module/kernel parameter: edac_mc_panic_on_ue=[0|1]
795 echo "1" > /sys/module/edac_core/parameters/edac_mc_panic_on_ue
798 - ``edac_mc_log_ue`` - Log UE control file
801 Generate kernel messages describing uncorrectable errors. These errors
802 are reported through the system message log system. UE statistics
803 will be accumulated even when UE logging is disabled.
807 module/kernel parameter: edac_mc_log_ue=[0|1]
811 echo "1" > /sys/module/edac_core/parameters/edac_mc_log_ue
814 - ``edac_mc_log_ce`` - Log CE control file
817 Generate kernel messages describing correctable errors. These
818 errors are reported through the system message log system.
819 CE statistics will be accumulated even when CE logging is disabled.
823 module/kernel parameter: edac_mc_log_ce=[0|1]
827 echo "1" > /sys/module/edac_core/parameters/edac_mc_log_ce
830 - ``edac_mc_poll_msec`` - Polling period control file
The time period, in milliseconds, for polling for error information.
Too small a value wastes resources. Too large a value might delay
necessary handling of errors and might lose valuable information for
locating the error. 1000 milliseconds (once each second) is the current
default. Systems which require all the bandwidth they can get may
module/kernel parameter: edac_mc_poll_msec=[msec value]
846 echo "1000" > /sys/module/edac_core/parameters/edac_mc_poll_msec
849 - ``panic_on_pci_parity`` - Panic on PCI PARITY Error
852 This control file enables or disables panicking when a parity
853 error has been detected.
856 module/kernel parameter::
858 edac_panic_on_pci_pe=[0|1]
862 echo "1" > /sys/module/edac_core/parameters/edac_panic_on_pci_pe
866 echo "0" > /sys/module/edac_core/parameters/edac_panic_on_pci_pe
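
To make such settings persistent across reboots, the corresponding module
parameters can also be set at load time, e.g. via a modprobe configuration
file (the file name below is arbitrary)::

    # /etc/modprobe.d/edac.conf (example file name)
    options edac_core edac_mc_panic_on_ue=1 edac_mc_log_ue=1 edac_mc_log_ce=1
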
873 In the header file, edac_pci.h, there is a series of edac_device structures
874 and APIs for the EDAC_DEVICE.
876 User space access to an edac_device is through the sysfs interface.
878 At the location ``/sys/devices/system/edac`` (sysfs) new edac_device devices
881 There is a three level tree beneath the above ``edac`` directory. For example,
the ``test_device_edac`` device (found at the http://bluesmoke.sourceforge.net
883 website) installs itself as::
885 /sys/devices/system/edac/test-instance
887 in this directory are various controls, a symlink and one or more ``instance``
890 The standard default controls are:
892 ============== =======================================================
893 log_ce boolean to log CE events
894 log_ue boolean to log UE events
895 panic_on_ue boolean to ``panic`` the system if an UE is encountered
896 (default off, can be set true via startup script)
897 poll_msec time period between POLL cycles for events
898 ============== =======================================================
900 The test_device_edac device adds at least one of its own custom control:
902 ============== ==================================================
903 test_bits which in the current test driver does nothing but
904 show how it is installed. A ported driver can
905 add one or more such controls and/or attributes
907 One out-of-tree driver uses controls here to allow
908 for ERROR INJECTION operations to hardware
910 ============== ==================================================
The symlink points to the ``struct device`` that is registered for this edac_device.
917 One or more instance directories are present. For the ``test_device_edac``
In this directory there are two default counter attributes, which are totals
of the counters in deeper subdirectories.
928 ============== ====================================
929 ce_count total of CE events of subdirectories
930 ue_count total of UE events of subdirectories
931 ============== ====================================
936 At the lowest directory level is the ``block`` directory. There can be 0, 1
937 or more blocks specified in each instance:
943 In this directory the default attributes are:
945 ============== ================================================
946 ce_count which is counter of CE events for this ``block``
947 of hardware being monitored
948 ue_count which is counter of UE events for this ``block``
949 of hardware being monitored
950 ============== ================================================
953 The ``test_device_edac`` device adds 4 attributes and 1 control:
955 ================== ====================================================
956 test-block-bits-0 for every POLL cycle this counter
958 test-block-bits-1 every 10 cycles, this counter is bumped once,
959 and test-block-bits-0 is set to 0
960 test-block-bits-2 every 100 cycles, this counter is bumped once,
961 and test-block-bits-1 is set to 0
962 test-block-bits-3 every 1000 cycles, this counter is bumped once,
963 and test-block-bits-2 is set to 0
964 ================== ====================================================
967 ================== ====================================================
968 reset-counters writing ANY thing to this control will
969 reset all the above counters.
970 ================== ====================================================
973 Use of the ``test_device_edac`` driver should enable any others to create their own
974 unique drivers for their hardware systems.
976 The ``test_device_edac`` sample driver is located at the
977 http://bluesmoke.sourceforge.net project site for EDAC.
980 Usage of EDAC APIs on Nehalem and newer Intel CPUs
981 --------------------------------------------------
983 On older Intel architectures, the memory controller was part of the North
984 Bridge chipset. Nehalem, Sandy Bridge, Ivy Bridge, Haswell, Sky Lake and
985 newer Intel architectures integrated an enhanced version of the memory
986 controller (MC) inside the CPUs.
This chapter will cover the differences of the enhanced memory controllers
found on newer Intel CPUs, which are handled by the ``i7core_edac``,
``sb_edac`` and ``skx_edac`` drivers.
994 The Xeon E7 processor families use a separate chip for the memory
995 controller, called Intel Scalable Memory Buffer. This section doesn't
996 apply for such families.
1) There is one Memory Controller per QuickPath Interconnect
   (QPI). In the driver, the term "socket" means one QPI. This is
   associated with a physical CPU socket.
   Each MC has 3 physical read channels, 3 physical write channels and
   3 logical channels. The driver currently sees it as just 3 channels.
   Each channel can have up to 3 DIMMs.
   The minimum known unit is the DIMM. There is no information about csrows.
   As the EDAC API maps the minimum unit as csrows, the driver sequentially
   maps each channel/DIMM pair into a different csrow.
1010 For example, supposing the following layout::
1012 Ch0 phy rd0, wr0 (0x063f4031): 2 ranks, UDIMMs
1013 dimm 0 1024 Mb offset: 0, bank: 8, rank: 1, row: 0x4000, col: 0x400
1014 dimm 1 1024 Mb offset: 4, bank: 8, rank: 1, row: 0x4000, col: 0x400
1015 Ch1 phy rd1, wr1 (0x063f4031): 2 ranks, UDIMMs
1016 dimm 0 1024 Mb offset: 0, bank: 8, rank: 1, row: 0x4000, col: 0x400
1017 Ch2 phy rd3, wr3 (0x063f4031): 2 ranks, UDIMMs
1018 dimm 0 1024 Mb offset: 0, bank: 8, rank: 1, row: 0x4000, col: 0x400
1020 The driver will map it as::
1022 csrow0: channel 0, dimm0
1023 csrow1: channel 0, dimm1
1024 csrow2: channel 1, dimm0
1025 csrow3: channel 2, dimm0
1027 exports one DIMM per csrow.
1029 Each QPI is exported as a different memory controller.
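
   Since each QPI/socket is exported as its own memory controller, on a
   dual-socket system one would expect to see one ``mcX`` directory per
   socket, e.g.::

       # One mcX directory is expected per socket/QPI
       ls /sys/devices/system/edac/mc/
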
1031 2) The MC has the ability to inject errors to test drivers. The drivers
1032 implement this functionality via some error injection nodes:
1034 For injecting a memory error, there are some sysfs nodes, under
1035 ``/sys/devices/system/edac/mc/mc?/``:
1037 - ``inject_addrmatch/*``:
1038 Controls the error injection mask register. It is possible to specify
1039 several characteristics of the address to match an error code::
1041 dimm = the affected dimm. Numbers are relative to a channel;
1042 rank = the memory rank;
1043 channel = the channel that will generate an error;
1044 bank = the affected bank;
1045 page = the page address;
1046 column (or col) = the address column.
1048 each of the above values can be set to "any" to match any valid value.
1050 At driver init, all values are set to any.
1052 For example, to generate an error at rank 1 of dimm 2, for any channel,
1053 any bank, any page, any column::
1055 echo 2 >/sys/devices/system/edac/mc/mc0/inject_addrmatch/dimm
1056 echo 1 >/sys/devices/system/edac/mc/mc0/inject_addrmatch/rank
1058 To return to the default behaviour of matching any, you can do::
1060 echo any >/sys/devices/system/edac/mc/mc0/inject_addrmatch/dimm
1061 echo any >/sys/devices/system/edac/mc/mc0/inject_addrmatch/rank
1063 - ``inject_eccmask``:
  specifies which bits will have errors injected,
1066 - ``inject_section``:
1067 specifies what ECC cache section will get the error::
1074 specifies the type of error, being a combination of the following bits::
1080 - ``inject_enable``:
  starts the error generation when something other than 0 is written.
All inject variables can be read. Root permission is needed to write to them.
The datasheet states that the error will only be generated after a write to an
address that matches inject_addrmatch. It seems, however, that reading will
also produce an error.
1089 For example, the following code will generate an error for any write access
1090 at socket 0, on any DIMM/address on channel 2::
1092 echo 2 >/sys/devices/system/edac/mc/mc0/inject_addrmatch/channel
1093 echo 2 >/sys/devices/system/edac/mc/mc0/inject_type
1094 echo 64 >/sys/devices/system/edac/mc/mc0/inject_eccmask
1095 echo 3 >/sys/devices/system/edac/mc/mc0/inject_section
1096 echo 1 >/sys/devices/system/edac/mc/mc0/inject_enable
1097 dd if=/dev/mem of=/dev/null seek=16k bs=4k count=1 >& /dev/null
For socket 1, it is necessary to replace "mc0" with "mc1" in the above
1102 The generated error message will look like::
1104 EDAC MC0: UE row 0, channel-a= 0 channel-b= 0 labels "-": NON_FATAL (addr = 0x0075b980, socket=0, Dimm=0, Channel=2, syndrome=0x00000040, count=1, Err=8c0000400001009f:4000080482 (read error: read ECC error))
1106 3) Corrected Error memory register counters
1108 Those newer MCs have some registers to count memory errors. The driver
1109 uses those registers to report Corrected Errors on devices with Registered
However, those counters don't work with Unregistered DIMMs. As the chipset
1113 offers some counters that also work with UDIMMs (but with a worse level of
1114 granularity than the default ones), the driver exposes those registers for
1117 They can be read by looking at the contents of ``all_channel_counts/``::
1119 $ for i in /sys/devices/system/edac/mc/mc0/all_channel_counts/*; do echo $i; cat $i; done
1120 /sys/devices/system/edac/mc/mc0/all_channel_counts/udimm0
1122 /sys/devices/system/edac/mc/mc0/all_channel_counts/udimm1
1124 /sys/devices/system/edac/mc/mc0/all_channel_counts/udimm2
1127 What happens here is that errors on different csrows, but at the same
1128 dimm number will increment the same counter.
1129 So, in this memory mapping::
1131 csrow0: channel 0, dimm0
1132 csrow1: channel 0, dimm1
1133 csrow2: channel 1, dimm0
1134 csrow3: channel 2, dimm0
1136 The hardware will increment udimm0 for an error at the first dimm at either
1137 csrow0, csrow2 or csrow3;
1139 The hardware will increment udimm1 for an error at the second dimm at either
1140 csrow0, csrow2 or csrow3;
1142 The hardware will increment udimm2 for an error at the third dimm at either
1143 csrow0, csrow2 or csrow3;
1145 4) Standard error counters
The standard error counters are generated when an mcelog error is received
by the driver. Since, with UDIMMs, this is counted by software, it is
possible that some errors could be lost. With RDIMMs, they display the
contents of the registers
1152 Reference documents used on ``amd64_edac``
1153 ------------------------------------------
The ``amd64_edac`` module is based on the following documents
1156 (available from http://support.amd.com/en-us/search/tech-docs):
1158 1. :Title: BIOS and Kernel Developer's Guide for AMD Athlon 64 and AMD
1160 :AMD publication #: 26094
1162 :Link: http://support.amd.com/TechDocs/26094.PDF
1164 2. :Title: BIOS and Kernel Developer's Guide for AMD NPT Family 0Fh
1166 :AMD publication #: 32559
1168 :Issue Date: May 2006
1169 :Link: http://support.amd.com/TechDocs/32559.pdf
1171 3. :Title: BIOS and Kernel Developer's Guide (BKDG) For AMD Family 10h
1173 :AMD publication #: 31116
1175 :Issue Date: September 07, 2007
1176 :Link: http://support.amd.com/TechDocs/31116.pdf
1178 4. :Title: BIOS and Kernel Developer's Guide (BKDG) for AMD Family 15h
1179 Models 30h-3Fh Processors
1180 :AMD publication #: 49125
1182 :Issue Date: 2/12/2015 (latest release)
1183 :Link: http://support.amd.com/TechDocs/49125_15h_Models_30h-3Fh_BKDG.pdf
1185 5. :Title: BIOS and Kernel Developer's Guide (BKDG) for AMD Family 15h
1186 Models 60h-6Fh Processors
1187 :AMD publication #: 50742
1189 :Issue Date: 7/23/2015 (latest release)
1190 :Link: http://support.amd.com/TechDocs/50742_15h_Models_60h-6Fh_BKDG.pdf
1192 6. :Title: BIOS and Kernel Developer's Guide (BKDG) for AMD Family 16h
1193 Models 00h-0Fh Processors
1194 :AMD publication #: 48751
1196 :Issue Date: 2/23/2015 (latest release)
1197 :Link: http://support.amd.com/TechDocs/48751_16h_bkdg.pdf
1202 * Written by Doug Thompson <dougthompson@xmission.com>
1205 - 17 Jul 2007 Updated
1207 * |copy| Mauro Carvalho Chehab
1209 - 05 Aug 2009 Nehalem interface
1210 - 26 Oct 2016 Converted to ReST and cleanups at the Nehalem section
1212 * EDAC authors/maintainers:
1214 - Doug Thompson, Dave Jiang, Dave Peterson et al,
1215 - Mauro Carvalho Chehab
1217 - original author: Thayne Harbaugh