PCI Error Recovery
------------------

Current document maintainer:
Linas Vepstas <linasvepstas@gmail.com>
updated by Richard Lary <rlary@us.ibm.com>
and Mike Mason <mmlnx@us.ibm.com> on 27-Jul-2009

Many PCI bus controllers are able to detect a variety of hardware
PCI errors on the bus, such as parity errors on the data and address
buses, as well as SERR and PERR errors. Some of the more advanced
chipsets are able to deal with these errors; these include PCI-E chipsets,
and the PCI-host bridges found on IBM Power4, Power5 and Power6-based
pSeries boxes. A typical action taken is to disconnect the affected device,
halting all I/O to it. The goal of a disconnection is to avoid system
corruption; for example, to halt system memory corruption due to DMAs
to "wild" addresses. Typically, a reconnection mechanism is also
offered, so that the affected PCI device(s) are reset and put back
into working condition. The reset phase requires coordination
between the affected device drivers and the PCI controller chip.
This document describes a generic API for notifying device drivers
of a bus disconnection, and then performing error recovery.
This API is currently implemented in the 2.6.16 and later kernels.

Reporting and recovery is performed in several steps. First, when
a PCI hardware error has resulted in a bus disconnect, that event
is reported as soon as possible to all affected device drivers,
including multiple instances of a device driver on multi-function
cards. This allows device drivers to avoid deadlocking in spinloops,
waiting for some i/o-space register to change, when it never will.
It also gives the drivers a chance to defer incoming I/O as needed.

Next, recovery is performed in several stages. Most of the complexity
is forced by the need to handle multi-function devices, that is,
devices that have multiple device drivers associated with them.
In the first stage, each driver is allowed to indicate what type
of reset it desires, the choices being a simple re-enabling of I/O
or requesting a slot reset.

If any driver requests a slot reset, that is what will be done.

After a reset and/or a re-enabling of I/O, all drivers are
again notified, so that they may then perform any device setup/config
that may be required. After these have all completed, a final
"resume normal operations" event is sent out.

The biggest reason for choosing a kernel-based implementation rather
than a user-space implementation was the need to deal with bus
disconnects of PCI devices attached to storage media, and, in particular,
disconnects from devices holding the root file system. If the root
file system is disconnected, a user-space mechanism would have to go
through a large number of contortions to complete recovery. Almost all
of the current Linux file systems are not tolerant of disconnection
from/reconnection to their underlying block device. By contrast,
bus errors are easy to manage in the device driver. Indeed, most
device drivers already handle very similar recovery procedures;
for example, the SCSI-generic layer already provides significant
mechanisms for dealing with SCSI bus errors and SCSI bus resets.

Detailed Design
---------------
Design and implementation details below, based on a chain of
public email discussions with Ben Herrenschmidt, circa 5 April 2005.

The error recovery API support is exposed to the driver in the form of
a structure of function pointers pointed to by a new field in struct
pci_driver. A driver that fails to provide the structure is "non-aware",
and the actual recovery steps taken are platform dependent. The
arch/powerpc implementation will simulate a PCI hotplug remove/add.

This structure has the form:
struct pci_error_handlers
{
        int (*error_detected)(struct pci_dev *dev, enum pci_channel_state);
        int (*mmio_enabled)(struct pci_dev *dev);
        int (*slot_reset)(struct pci_dev *dev);
        void (*resume)(struct pci_dev *dev);
};

The possible channel states are:
enum pci_channel_state {
        pci_channel_io_normal,       /* I/O channel is in normal state */
        pci_channel_io_frozen,       /* I/O to channel is blocked */
        pci_channel_io_perm_failure, /* PCI card is dead */
};

Possible return values are:
enum pci_ers_result {
        PCI_ERS_RESULT_NONE,        /* no result/none/not supported in device driver */
        PCI_ERS_RESULT_CAN_RECOVER, /* Device driver can recover without slot reset */
        PCI_ERS_RESULT_NEED_RESET,  /* Device driver wants slot to be reset. */
        PCI_ERS_RESULT_DISCONNECT,  /* Device has completely failed, is unrecoverable */
        PCI_ERS_RESULT_RECOVERED,   /* Device driver is fully recovered and operational */
};

A driver does not have to implement all of these callbacks; however,
if it implements any, it must implement error_detected(). If a callback
is not implemented, the corresponding feature is considered unsupported.
For example, if mmio_enabled() and resume() aren't there, then it
is assumed that the driver is not doing any direct recovery and requires
a slot reset. Typically a driver will want to know about
a slot_reset().
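
For illustration, here is a sketch of how a driver might hook these
callbacks into its struct pci_driver. The foo_* names are hypothetical,
and note that recent kernels type the callback returns as
pci_ers_result_t rather than int:

static struct pci_error_handlers foo_err_handler = {
        .error_detected = foo_error_detected,
        .mmio_enabled   = foo_mmio_enabled,
        .slot_reset     = foo_slot_reset,
        .resume         = foo_resume,
};

static struct pci_driver foo_driver = {
        .name           = "foo",
        /* .id_table, .probe, .remove, etc. as usual */
        .err_handler    = &foo_err_handler,
};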

The actual steps taken by a platform to recover from a PCI error
event will be platform-dependent, but will follow the general
sequence described below.

STEP 0: Error Event: ERR_NONFATAL
---------------------------------
A PCI bus error is detected by the PCI hardware. On powerpc, the slot
is isolated, in that all I/O is blocked: all reads return 0xffffffff,
all writes are ignored.

STEP 1: Notification
--------------------
Platform calls the error_detected() callback on every instance of
every driver affected by the error.

At this point, the device might not be accessible anymore, depending on
the platform (the slot will be isolated on powerpc). The driver may
already have "noticed" the error because of a failing I/O, but this
is the proper "synchronization point"; that is, it gives the driver
a chance to clean up, waiting for pending stuff (timers, whatever, etc.)
to complete; it can take semaphores, schedule, etc., everything but
touch the device. Within this function and after it returns, the driver
shouldn't do any new IOs. Called in task context. This is sort of a
"quiesce" point. See the note about interrupts at the end of this doc.

All drivers participating in this system must implement this call.
The driver must return one of the following result codes:
- PCI_ERS_RESULT_CAN_RECOVER:
  Driver returns this if it thinks it might be able to recover
  the HW by just banging IOs or if it wants to be given
  a chance to extract some diagnostic information (see
  mmio_enabled() below).
- PCI_ERS_RESULT_NEED_RESET:
  Driver returns this if it can't recover without a
  slot reset.
- PCI_ERS_RESULT_DISCONNECT:
  Driver returns this if it doesn't want to recover at all.
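
As a rough sketch (not taken from any real driver; foo_adapter and its
members are hypothetical), a network driver's error_detected() might
quiesce itself like this, also handling the permanent-failure
notification described in STEP 5:

static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
                                           enum pci_channel_state state)
{
        struct foo_adapter *adap = pci_get_drvdata(pdev);

        /* The platform says the card is dead; give up on it. */
        if (state == pci_channel_io_perm_failure)
                return PCI_ERS_RESULT_DISCONNECT;

        /* Quiesce: stop queuing new I/O and let pending work finish,
         * but do not touch the (possibly isolated) device. */
        netif_device_detach(adap->netdev);
        del_timer_sync(&adap->watchdog_timer);

        /* Ask for a slot reset rather than trying to recover via
         * mmio_enabled(). */
        return PCI_ERS_RESULT_NEED_RESET;
}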

The next step taken will depend on the result codes returned by the
drivers.

If all drivers on the segment/slot return PCI_ERS_RESULT_CAN_RECOVER,
then the platform should re-enable IOs on the slot (or do nothing in
particular, if the platform doesn't isolate slots), and recovery
proceeds to STEP 2 (MMIO Enabled).

If any driver requested a slot reset (by returning PCI_ERS_RESULT_NEED_RESET),
then recovery proceeds to STEP 3 (Slot Reset).

If the platform is unable to recover the slot, the next step
is STEP 5 (Permanent Failure).

>>> The current powerpc implementation assumes that a device driver will
>>> *not* schedule or semaphore in this routine; the current powerpc
>>> implementation uses one kernel thread to notify all devices;
>>> thus, if one device sleeps/schedules, all devices are affected.
>>> Doing better requires complex multi-threaded logic in the error
>>> recovery implementation (e.g. waiting for all notification threads
>>> to "join" before proceeding with recovery.) This seems excessively
>>> complex and not worth implementing.

>>> The current powerpc implementation doesn't much care if the device
>>> attempts I/O at this point, or not. I/Os will fail, returning
>>> a value of 0xff on read, and writes will be dropped. If more than
>>> EEH_MAX_FAILS I/Os are attempted to a frozen adapter, EEH
>>> assumes that the device driver has gone into an infinite loop
>>> and prints an error to syslog. A reboot is then required to
>>> get the device working again.

STEP 2: MMIO Enabled
--------------------
The platform re-enables MMIO to the device (but typically not the
DMA), and then calls the mmio_enabled() callback on all affected
device drivers.

This is the "early recovery" call. IOs are allowed again, but DMA is
not, with some restrictions. This is NOT a callback for the driver to
start operations again, only to peek/poke at the device, extract diagnostic
information, if any, and possibly do things like trigger a device-local
reset or some such, but not restart operations. This callback is made if
all drivers on a segment agree that they can try to recover and if no automatic
link reset was performed by the HW. If the platform can't just re-enable IOs
without a slot reset or a link reset, it will not call this callback, and
instead will have gone directly to STEP 3 (Slot Reset).

>>> The following is proposed; no platform implements this yet:
>>> Proposal: All I/O's should be done _synchronously_ from within
>>> this callback, errors triggered by them will be returned via
>>> the normal pci_check_whatever() API, no new error_detected()
>>> callback will be issued due to an error happening here. However,
>>> such an error might cause IOs to be re-blocked for the whole
>>> segment, and thus invalidate the recovery that other devices
>>> on the same segment might have done, forcing the whole segment
>>> into one of the next states, that is, link reset or slot reset.

The driver should return one of the following result codes:
- PCI_ERS_RESULT_RECOVERED
  Driver returns this if it thinks the device is fully
  functional and thinks it is ready to start
  normal driver operations again. There is no
  guarantee that the driver will actually be
  allowed to proceed, as another driver on the
  same segment might have failed and thus triggered a
  slot reset on platforms that support it.

- PCI_ERS_RESULT_NEED_RESET
  Driver returns this if it thinks the device is not
  recoverable in its current state and it needs a slot
  reset to proceed.

- PCI_ERS_RESULT_DISCONNECT
  Same as above. Total failure, no recovery even after
  reset, driver dead. (To be defined more precisely.)
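
A hedged sketch of an mmio_enabled() implementation for the hypothetical
foo driver (FOO_STATUS_REG is made up; adap->regs is assumed to be the
device's ioremap'ed register window). Note that it only peeks at the
device and does not restart operations:

static pci_ers_result_t foo_mmio_enabled(struct pci_dev *pdev)
{
        struct foo_adapter *adap = pci_get_drvdata(pdev);
        u32 status;

        /* MMIO works again: synchronously read a status register
         * to capture diagnostic state. */
        status = readl(adap->regs + FOO_STATUS_REG);

        /* All-ones usually means the reads are still failing. */
        if (status == 0xffffffff)
                return PCI_ERS_RESULT_NEED_RESET;

        dev_info(&pdev->dev, "status after error: 0x%08x\n", status);
        return PCI_ERS_RESULT_RECOVERED;
}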

The next step taken depends on the results returned by the drivers.
If all drivers returned PCI_ERS_RESULT_RECOVERED, then the platform
proceeds to STEP 4 (Resume Operations).

If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
proceeds to STEP 3 (Slot Reset).

STEP 3: Slot Reset
------------------
In response to a return value of PCI_ERS_RESULT_NEED_RESET, the
platform will perform a slot reset on the requesting PCI device(s).
The actual steps taken by a platform to perform a slot reset
will be platform-dependent. Upon completion of slot reset, the
platform will call the device slot_reset() callback.

Powerpc platforms implement two levels of slot reset:
soft reset (default) and fundamental reset (optional).

Powerpc soft reset consists of asserting the adapter #RST line and then
restoring the PCI BARs and PCI configuration header to a state
that is equivalent to what it would be after a fresh system
power-on followed by power-on BIOS/system firmware initialization.
Soft reset is also known as hot-reset.

Powerpc fundamental reset is supported by PCI Express cards only
and causes the device's state machines, hardware logic, port states and
configuration registers to initialize to their default conditions.

For most PCI devices, a soft reset will be sufficient for recovery.
The optional fundamental reset is provided to support a limited number
of PCI Express devices for which a soft reset is not sufficient
for recovery.

If the platform supports PCI hotplug, then the reset might be
performed by toggling the slot electrical power off/on.

It is important for the platform to restore the PCI config space
to the "fresh poweron" state, rather than the "last state". After
a slot reset, the device driver will almost always use its standard
device initialization routines, and an unusual config space setup
may result in hung devices, kernel panics, or silent data corruption.

This call gives drivers the chance to re-initialize the hardware
(re-download firmware, etc.). At this point, the driver may assume
that the card is in a fresh state and is fully functional. The slot
is unfrozen and the driver has full access to PCI config space,
memory mapped I/O space and DMA. Interrupts (Legacy, MSI, or MSI-X)
will also be available.

Drivers should not restart normal I/O processing operations
at this point. If all device drivers report success on this
callback, the platform will call resume() to complete the sequence,
and let the driver restart normal I/O processing.

A driver can still return a critical failure for this function if
it can't get the device operational after reset. If the platform
previously tried a soft reset, it might now try a hard reset (power
cycle) and then call slot_reset() again. If the device still can't
be recovered, there is nothing more that can be done; the platform
will typically report a "permanent failure" in such a case. The
device will be considered "dead" in this case.
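
A minimal slot_reset() sketch for the hypothetical foo driver, assuming
a foo_hw_init() helper that re-downloads firmware and re-programs the
device from scratch:

static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
{
        struct foo_adapter *adap = pci_get_drvdata(pdev);

        /* The slot is fresh from reset; re-enable the device. */
        if (pci_enable_device(pdev)) {
                dev_err(&pdev->dev,
                        "cannot re-enable device after slot reset\n");
                return PCI_ERS_RESULT_DISCONNECT;
        }
        pci_set_master(pdev);

        /* Re-initialize the hardware, but do not restart I/O yet;
         * that happens in resume(). */
        if (foo_hw_init(adap))
                return PCI_ERS_RESULT_DISCONNECT;

        return PCI_ERS_RESULT_RECOVERED;
}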

Drivers for multi-function cards will need to coordinate among
themselves as to which driver instance will perform any "one-shot"
or global device initialization. For example, the Symbios sym53cxx2
driver performs device init only from PCI function 0:

+       if (PCI_FUNC(pdev->devfn) == 0)
+               sym_reset_scsi_bus(np, 0);

Result codes:
- PCI_ERS_RESULT_DISCONNECT
  Same as above.

Drivers for PCI Express cards that require a fundamental reset must
set the needs_freset bit in the pci_dev structure in their probe function.
For example, the QLogic qla2xxx driver sets the needs_freset bit for certain
PCI card types:

+       /* Set EEH reset type to fundamental if required by hba */
+       if (IS_QLA24XX(ha) || IS_QLA25XX(ha) || IS_QLA81XX(ha))
+               pdev->needs_freset = 1;

Platform proceeds either to STEP 4 (Resume Operations) or STEP 5 (Permanent
Failure).

>>> The current powerpc implementation does not try a power-cycle
>>> reset if the driver returned PCI_ERS_RESULT_DISCONNECT.
>>> However, it probably should.

STEP 4: Resume Operations
-------------------------
The platform will call the resume() callback on all affected device
drivers if all drivers on the segment have returned
PCI_ERS_RESULT_RECOVERED from one of the 3 previous callbacks.
The goal of this callback is to tell the driver to restart activity,
that everything is back and running. This callback does not return
a result code.

At this point, if a new error happens, the platform will restart
a new error recovery sequence.
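
Completing the hypothetical foo driver, resume() is where normal I/O
finally restarts:

static void foo_resume(struct pci_dev *pdev)
{
        struct foo_adapter *adap = pci_get_drvdata(pdev);

        /* Everything is back; restart normal I/O processing. */
        netif_device_attach(adap->netdev);
        mod_timer(&adap->watchdog_timer, jiffies + HZ);
}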

STEP 5: Permanent Failure
-------------------------
A "permanent failure" has occurred, and the platform cannot recover
the device. The platform will call error_detected() with a
pci_channel_state value of pci_channel_io_perm_failure.

The device driver should, at this point, assume the worst. It should
cancel all pending I/O and refuse all new I/O, returning -EIO to
higher layers. The device driver should then clean up all of its
memory and remove itself from kernel operations, much as it would
during system shutdown.

The platform will typically notify the system operator of the
permanent failure in some way. If the device is hotplug-capable,
the operator will probably want to remove and replace the device.
Note, however, that not all failures are truly "permanent". Some are
caused by over-heating, some by a poorly seated card. Many
PCI error events are caused by software bugs, e.g. DMAs to
wild addresses or bogus split transactions due to programming
errors. See the discussion in powerpc/eeh-pci-error-recovery.txt
for additional detail on real-life experience of the causes of
software errors.

STEP 0: Error Event: ERR_FATAL
------------------------------
A PCI bus error is detected by the PCI hardware. On powerpc, the slot is
isolated, in that all I/O is blocked: all reads return 0xffffffff, all
writes are ignored.

STEP 1: Remove devices
----------------------
The platform removes the devices, depending on the error agent: it
could be this port (for all subordinates) or an upstream component
(likely a downstream port).

STEP 2: Reset link
------------------
The platform resets the link. This is a PCI-Express specific step and is
done whenever a fatal error has been detected that can be "solved" by
resetting the link.

STEP 3: Re-enumerate the devices
--------------------------------
The platform initiates re-enumeration of the devices.

Conclusion; General Remarks
---------------------------
The way the callbacks are called is platform policy. A platform with
no slot reset capability may want to just "ignore" drivers that can't
recover (disconnect them) and try to let other cards on the same segment
recover. Keep in mind that in most real life cases, though, there will
be only one driver per segment.

Now, a note about interrupts. If you get an interrupt and your
device is dead or has been isolated, there is a problem :)
The current policy is to turn this into a platform policy.
That is, the recovery API only requires that:

- There is no guarantee that interrupt delivery can proceed from any
device on the segment starting from the error detection and until the
slot_reset callback is called, at which point interrupts are expected
to be fully operational.

- There is no guarantee that interrupt delivery is stopped, that is,
a driver that gets an interrupt after detecting an error, or that detects
an error within the interrupt handler such that it prevents proper
ack'ing of the interrupt (and thus removal of the source) should just
return IRQ_NONE. It's up to the platform to deal with that
condition, typically by masking the IRQ source for the duration of
the error handling. It is expected that the platform "knows" which
interrupts are routed to error-management capable slots and can deal
with temporarily disabling that IRQ number during error processing (this
isn't terribly complex). That means some IRQ latency for other devices
sharing the interrupt, but there is simply no other way. High end
platforms aren't supposed to share interrupts between many devices
anyway :)
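
For example, a shared-interrupt handler for the hypothetical foo device
might use pci_channel_offline() (available in recent kernels), or the
all-ones value an isolated slot returns on reads, to decline the
interrupt cleanly:

static irqreturn_t foo_interrupt(int irq, void *dev_id)
{
        struct foo_adapter *adap = dev_id;
        u32 cause;

        /* An isolated or dead device cannot be acked; decline the
         * interrupt and let the platform mask the source. */
        if (pci_channel_offline(adap->pdev))
                return IRQ_NONE;

        cause = readl(adap->regs + FOO_IRQ_CAUSE);
        if (cause == 0xffffffff || cause == 0)
                return IRQ_NONE;

        foo_handle_events(adap, cause);
        return IRQ_HANDLED;
}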

>>> Implementation details for the powerpc platform are discussed in
>>> the file Documentation/powerpc/eeh-pci-error-recovery.txt

>>> As of this writing, there is a growing list of device drivers with
>>> patches implementing error recovery. Not all of these patches are in
>>> mainline yet. These may be used as "examples":
>>>
>>> drivers/scsi/sym53c8xx_2
>>> drivers/scsi/qla2xxx
>>> drivers/scsi/lpfc
>>> drivers/net/bnx2.c
>>> drivers/net/e100.c
>>> drivers/net/e1000
>>> drivers/net/e1000e
>>> drivers/net/ixgbe
>>> drivers/net/cxgb3
>>> drivers/net/s2io.c