.. include:: <isonum.txt>

=========================================================
DPAA2 (Data Path Acceleration Architecture Gen2) Overview
=========================================================

:Copyright: |copy| 2015 Freescale Semiconductor Inc.
:Copyright: |copy| 2018 NXP

This document provides an overview of the Freescale DPAA2 architecture
and how it is integrated into the Linux kernel.

Introduction
============

DPAA2 is a hardware architecture designed for high-speed network
packet processing.  DPAA2 consists of sophisticated mechanisms for
processing Ethernet packets, queue management, buffer management,
autonomous L2 switching, virtual Ethernet bridging, and accelerator
(e.g. crypto) sharing.

A DPAA2 hardware component called the Management Complex (or MC) manages the
DPAA2 hardware resources.  The MC provides an object-based abstraction for
software drivers to use the DPAA2 hardware.
The MC uses DPAA2 hardware resources such as queues, buffer pools, and
network ports to create functional objects/devices such as network
interfaces, an L2 switch, or accelerator instances.
The MC provides memory-mapped I/O command interfaces (MC portals)
which DPAA2 software drivers use to operate on DPAA2 objects.
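
As an illustration of what an MC portal command looks like at this level,
the sketch below shows the general shape of a command exchange: a header
plus a small set of parameters is written into the portal's MMIO region,
and completion is detected by polling a status field in the header.  The
structure layout and helper are simplified for illustration only and are
not the kernel's exact definitions (those live in drivers/bus/fsl-mc/)::

   /*
    * Illustrative sketch only; the real header layout is defined by the
    * MC firmware ABI.
    */
   #include <stdint.h>

   #define MC_CMD_NUM_OF_PARAMS  7

   struct mc_command {
           uint64_t header;                        /* command id, token, status */
           uint64_t params[MC_CMD_NUM_OF_PARAMS];  /* command-specific data */
   };

   enum mc_cmd_status {
           MC_CMD_STATUS_OK    = 0x0,      /* command completed successfully */
           MC_CMD_STATUS_READY = 0xff,     /* firmware still processing */
   };

   /* Assumed, simplified status location: byte 2 of the header. */
   static unsigned int mc_cmd_status(uint64_t header)
   {
           return (header >> 16) & 0xff;
   }

   /*
    * Write a command to a memory-mapped MC portal and poll for completion.
    * The caller builds cmd->header with the status field preset to
    * MC_CMD_STATUS_READY; the MC overwrites it when the command is done.
    */
   static int mc_portal_send(volatile struct mc_command *portal,
                             const struct mc_command *cmd)
   {
           int i;

           /* Parameters first; writing the header hands the command to the MC. */
           for (i = 0; i < MC_CMD_NUM_OF_PARAMS; i++)
                   portal->params[i] = cmd->params[i];
           portal->header = cmd->header;

           /* The MC updates the status field in the header when done. */
           while (mc_cmd_status(portal->header) == MC_CMD_STATUS_READY)
                   ;

           return mc_cmd_status(portal->header) == MC_CMD_STATUS_OK ? 0 : -1;
   }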

The diagram below shows an overview of the DPAA2 resource management
architecture::

   +--------------------------------------+
   |                  OS                  |
   |                        DPAA2 drivers |
   |                             |        |
   +-----------------------------|--------+
                                 |
                                 | (create,discover,connect
                                 |  config,use,destroy)
                                 |
                    DPAA2        |
   +------------------------| mc portal |-+
   |                             |        |
   |   +- - - - - - - - - - - - -V- - -+  |
   |   |                               |  |
   |   |   Management Complex (MC)     |  |
   |   |                               |  |
   |   +- - - - - - - - - - - - - - - -+  |
   |                                      |
   | Hardware                  Hardware   |
   | Resources                 Objects    |
   | ---------                 -------    |
   | -frame queues             -DPRC      |
   | -buffer pools             -DPMCP     |
   | -Eth MACs/ports           -DPIO      |
   | -network interface        -DPNI      |
   |  profiles                 -DPMAC     |
   | -queue portals            -DPBP      |
   | -MC portals                ...       |
   |  ...                                 |
   |                                      |
   +--------------------------------------+

The MC mediates operations such as create, discover,
connect, configuration, and destroy.  Fast-path operations
on data, such as packet transmit/receive, are not mediated by
the MC and are done directly using memory-mapped regions in
DPIO objects.

Overview of DPAA2 Objects
=========================

This section provides a brief overview of some key DPAA2 objects.
A simple scenario is described illustrating the objects involved
in creating a network interface.

DPRC (Datapath Resource Container)
----------------------------------

A DPRC is a container object that holds all the other
types of DPAA2 objects.  In the example diagram below there
are 8 objects of 5 types (DPMCP, DPIO, DPBP, DPNI, and DPMAC)
in the container.

::

   +----------------------------------------------------------+
   | DPRC                                                      |
   |                                                           |
   |  +-------+  +-------+  +-------+  +-------+  +-------+    |
   |  | DPMCP |  | DPIO  |  | DPBP  |  | DPNI  |  | DPMAC |    |
   |  +-------+  +-------+  +-------+  +---+---+  +---+---+    |
   |  | DPMCP |  | DPIO  |                                     |
   |  +-------+  +-------+                                     |
   |  | DPMCP |                                                |
   |  +-------+                                                |
   |                                                           |
   +----------------------------------------------------------+

From the point of view of an OS, a DPRC behaves similarly to a plug and
play bus, like PCI.  DPRC commands can be used to enumerate the contents
of the DPRC and discover the hardware objects present (including mappable
regions and interrupts).
::

   DPRC.1 (bus)
     |
     +--+--------+-------+-------+-------+
        |        |       |       |       |
      DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1
      DPMCP.2  DPIO.2
      DPMCP.3
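
To make the bus analogy concrete, the hedged sketch below walks the
objects of an already-opened DPRC using this command interface.  The
dprc_get_obj_count() and dprc_get_obj() helpers mirror the kernel's
internal fsl-mc bus code (drivers/bus/fsl-mc/); treat the exact
signatures as assumptions that may differ between kernel versions::

   #include <linux/kernel.h>
   #include <linux/fsl/mc.h>

   /* Mirrors the bus driver's internal DPRC helpers (assumed signatures). */
   int dprc_get_obj_count(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
                          int *obj_count);
   int dprc_get_obj(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
                    int obj_index, struct fsl_mc_obj_desc *obj_desc);

   static void example_scan_container(struct fsl_mc_io *mc_io, u16 dprc_token)
   {
           struct fsl_mc_obj_desc desc;
           int i, count = 0;

           if (dprc_get_obj_count(mc_io, 0, dprc_token, &count))
                   return;

           /* Each descriptor names one child object, e.g. "dpni", id 1. */
           for (i = 0; i < count; i++) {
                   if (dprc_get_obj(mc_io, 0, dprc_token, i, &desc))
                           continue;
                   pr_info("found %s.%d\n", desc.type, desc.id);
           }
   }

The DPRC driver performs essentially this kind of scan at probe time and
creates a child device for every object it finds.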

Hardware objects can be created and destroyed dynamically, providing
the ability to hot plug/unplug objects in and out of the DPRC.

A DPRC has a mappable MMIO region (an MC portal) that can be used
to send MC commands.  It has an interrupt for status events (like
hotplug).

All objects in a container share the same hardware "isolation context".
This means that with respect to an IOMMU the isolation granularity
is at the DPRC (container) level, not at the individual object
level.

DPRCs can be defined statically and populated with objects
via a config file passed to the MC when firmware starts it.

DPAA2 Objects for an Ethernet Network Interface
-----------------------------------------------

A typical Ethernet NIC is monolithic-- the NIC device contains TX/RX
queuing mechanisms, configuration mechanisms, buffer management,
physical ports, and interrupts.  DPAA2 uses a more granular approach
utilizing multiple hardware objects.  Each object provides specialized
functions.  Groups of these objects are used by software to provide
Ethernet network interface functionality.  This approach provides
efficient use of finite hardware resources, flexibility, and
performance advantages.

The diagram below shows the objects needed for a simple
network interface configuration on a system with 2 CPUs.

::

   +---+---+ +---+---+
      CPU0      CPU1
   +---+---+ +---+---+
       |         |
   +---+---+ +---+---+
      DPIO      DPIO
   +---+---+ +---+---+
        \       /
         \     /
          \   /
           \ /
        +---+---+
           DPNI  --- DPBP, DPMCP
        +---+---+
            |
            |
        +---+---+
          DPMAC
        +---+---+
            |
         port/PHY

The objects are described below.  For each object a brief description
is provided along with a summary of the kinds of operations the object
supports and a summary of its key resources (MMIO regions and IRQs).

DPMAC (Datapath Ethernet MAC)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Represents an Ethernet MAC, a hardware device that connects to an Ethernet
PHY and allows physical transmission and reception of Ethernet frames.

- MMIO regions: none
- IRQs: DPNI link change
- commands: set link up/down, link config, get stats,
  IRQ config, enable, reset

DPNI (Datapath Network Interface)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Contains TX/RX queues, network interface configuration, and RX buffer pool
configuration mechanisms.  The TX/RX queues are in memory and are identified
by queue number.

- MMIO regions: none
- IRQs: link state
- commands: port config, offload config, queue config,
  parse/classify config, IRQ config, enable, reset

DPIO (Datapath I/O)
~~~~~~~~~~~~~~~~~~~
Provides interfaces to enqueue and dequeue
packets and do hardware buffer pool management operations.  The DPAA2
architecture separates the mechanism to access queues (the DPIO object)
from the queues themselves.  The DPIO provides an MMIO interface to
enqueue/dequeue packets.  To enqueue something a descriptor is written
to the DPIO MMIO region, which includes the target queue number.
There will typically be one DPIO assigned to each CPU.  This allows all
CPUs to simultaneously perform enqueue/dequeue operations.  DPIOs are
expected to be shared by different DPAA2 drivers.

- MMIO regions: queue operations, buffer management
- IRQs: data availability, congestion notification, buffer
  pool depletion
- commands: IRQ config, enable, reset
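
As a hedged illustration of the enqueue mechanism, the sketch below builds
a frame descriptor and hands it to a DPIO software portal using the
kernel's DPIO service API (include/soc/fsl/dpaa2-io.h and dpaa2-fd.h).
The helper names and signatures are assumptions and may differ between
kernel versions::

   #include <linux/types.h>
   #include <linux/string.h>
   #include <soc/fsl/dpaa2-io.h>
   #include <soc/fsl/dpaa2-fd.h>

   static int example_enqueue(struct dpaa2_io *dpio, u32 tx_fqid,
                              dma_addr_t frame_iova, u32 len)
   {
           struct dpaa2_fd fd;

           /* Build the frame descriptor that will be written to the portal. */
           memset(&fd, 0, sizeof(fd));
           dpaa2_fd_set_addr(&fd, frame_iova);     /* IOVA of the frame data */
           dpaa2_fd_set_len(&fd, len);             /* frame length in bytes */
           dpaa2_fd_set_format(&fd, dpaa2_fd_single);

           /*
            * The target queue is named by its frame queue id; the MMIO
            * write to the DPIO portal happens inside the service call.
            */
           return dpaa2_io_service_enqueue_fq(dpio, tx_fqid, &fd);
   }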

DPBP (Datapath Buffer Pool)
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Represents a hardware buffer pool.

- MMIO regions: none
- IRQs: none
- commands: enable, reset

DPMCP (Datapath MC Portal)
~~~~~~~~~~~~~~~~~~~~~~~~~~
Provides an MC command portal.
Used by drivers to send commands to the MC to manage
objects.

- MMIO regions: MC command portal
- IRQs: command completion
- commands: IRQ config, enable, reset

Object Connections
==================

Some objects have explicit relationships that must
be configured:

- DPNI <--> DPMAC
- DPNI <--> DPNI
- DPNI <--> L2-switch-port

  A DPNI must be connected to something such as a DPMAC,
  another DPNI, or an L2 switch port.  The DPNI connection
  is made via a DPRC command.

- DPNI <--> DPBP

  A network interface requires a 'buffer pool' (DPBP
  object) which provides a list of pointers to memory
  where received Ethernet data is to be copied.  The
  Ethernet driver configures the DPBPs associated with
  the network interface.

Interrupts
==========

All interrupts generated by DPAA2 objects are message
interrupts.  At the hardware level message interrupts
generated by devices will normally have 3 components--
1) a non-spoofable 'device-id' expressed on the hardware
bus, 2) an address, 3) a data value.

In the case of DPAA2 devices/objects, all objects in the
same container/DPRC share the same 'device-id'.
For ARM-based SoCs this is the same as the stream ID.

DPAA2 Linux Drivers Overview
============================

This section provides an overview of the Linux kernel drivers for
DPAA2-- 1) the bus driver and associated "DPAA2 infrastructure"
drivers and 2) functional object drivers (such as Ethernet).

As described previously, a DPRC is a container that holds the other
types of DPAA2 objects.  It is functionally similar to a plug-and-play
bus controller.
Each object in the DPRC is a Linux "device" and is bound to a driver.
The diagram below shows the Linux drivers involved in a networking
scenario and the objects bound to each driver.  A brief description
of each driver follows.
::

                                             +------------+
                                             | OS Network |
                                             |   Stack    |
               +------------+               +------------+
               | Allocator  |. . . . . . . .|  Ethernet  |
               |(DPMCP,DPBP)|               |   (DPNI)   |
               +-.----------+               +---+---+----+
                .           .                   ^   |
               .             .     <data avail, |   | <enqueue,
              .               .     tx confirm> |   |  dequeue>
   +-------------+             .                |   |
   | DPRC driver |              .           +---+---V----+     +---------+
   |   (DPRC)    |               . . . . . .| DPIO driver|     |   MAC   |
   +----------+--+                          |  (DPIO)    |     | (DPMAC) |
              |                             +------+-----+     +-----+---+
              |<dev add/remove>                    |                 |
              |                                    |                 |
     +--------+----------+                         |              +--+---+
     |   MC-bus driver   |                         |              | PHY  |
     |                   |                         |              |driver|
     |   /bus/fsl-mc     |                         |              +--+---+
     +-------------------+                         |                 |
                                                   |                 |
   ========================== HARDWARE ===========|=================|=====
                                                 DPIO                |
                                                   |                 |
                                                 DPNI---DPBP         |
                                                   |                 |
                                                 DPMAC                |
                                                   |                 |
                                                  PHY ---------------+
   ===============================================|======================
                                             HARDWARE

A brief description of each driver is provided below.

MC-bus driver
-------------

The MC-bus driver is a platform driver and is probed from a
node in the device tree (compatible "fsl,qoriq-mc") passed in by boot
firmware.  It is responsible for bootstrapping the DPAA2 kernel
infrastructure.
Key functions include:

- registering a new bus type named "fsl-mc" with the kernel,
  and implementing bus call-backs (e.g. match/uevent/dev_groups)
- implementing APIs for DPAA2 driver registration and for device
  add/remove
- creating an MSI IRQ domain
- doing a 'device add' to expose the 'root' DPRC, in turn triggering
  a bind of the root DPRC to the DPRC driver

The binding for the MC-bus device-tree node can be consulted at
*Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt*.
The sysfs bind/unbind interfaces for the MC-bus can be consulted at
*Documentation/ABI/testing/sysfs-bus-fsl-mc*.
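
The driver-registration API mentioned above gives each DPAA2 object type
its own Linux driver.  The hedged sketch below shows the general shape of
a driver that binds to one object type on the fsl-mc bus (here "dpni");
it is modeled on include/linux/fsl/mc.h, but member names and the
remove() prototype should be treated as assumptions that can vary between
kernel versions::

   #include <linux/module.h>
   #include <linux/device.h>
   #include <linux/fsl/mc.h>

   static int example_probe(struct fsl_mc_device *mc_dev)
   {
           dev_info(&mc_dev->dev, "bound to %s.%d\n",
                    mc_dev->obj_desc.type, mc_dev->obj_desc.id);
           return 0;
   }

   static void example_remove(struct fsl_mc_device *mc_dev)
   {
           dev_info(&mc_dev->dev, "unbound\n");
   }

   static const struct fsl_mc_device_id example_match_id_table[] = {
           { .vendor = FSL_MC_VENDOR_FREESCALE, .obj_type = "dpni" },
           { /* sentinel */ }
   };

   static struct fsl_mc_driver example_driver = {
           .driver = {
                   .name = "example-dpni",
           },
           .probe          = example_probe,
           .remove         = example_remove,
           .match_id_table = example_match_id_table,
   };

   module_fsl_mc_driver(example_driver);
   MODULE_LICENSE("GPL");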

DPRC driver
-----------

The DPRC driver is bound to DPRC objects and does runtime management
of a bus instance.  It performs the initial bus scan of the DPRC
and handles interrupts for container events such as hot plug by
re-scanning the DPRC.

Allocator
---------

Certain objects such as DPMCP and DPBP are generic and fungible,
and are intended to be used by other drivers.  For example,
the DPAA2 Ethernet driver needs:

- DPMCPs to send MC commands to configure network interfaces
- DPBPs for network buffer pools

The allocator driver registers for these allocatable object types
and those objects are bound to the allocator when the bus is probed.
The allocator maintains a pool of objects that are available for
allocation by other DPAA2 drivers.
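
The hedged sketch below shows how a function driver might draw on these
pools at probe time: one MC portal (backed by a DPMCP) for issuing
commands, and one DPBP for buffers.  The calls mirror the allocator API
declared in include/linux/fsl/mc.h; treat the exact signatures as
assumptions::

   #include <linux/fsl/mc.h>

   static int example_get_resources(struct fsl_mc_device *mc_dev)
   {
           struct fsl_mc_device *dpbp_dev;
           struct fsl_mc_io *mc_io;
           int err;

           /* An MC command portal (DPMCP-backed) private to this driver. */
           err = fsl_mc_portal_allocate(mc_dev, 0, &mc_io);
           if (err)
                   return err;

           /* A buffer pool object for RX buffers. */
           err = fsl_mc_object_allocate(mc_dev, FSL_MC_POOL_DPBP, &dpbp_dev);
           if (err) {
                   fsl_mc_portal_free(mc_io);
                   return err;
           }

           /* ... open/configure the DPBP and seed it with buffers here ... */

           /* Shown only to pair the calls; a real driver frees at remove(). */
           fsl_mc_object_free(dpbp_dev);
           fsl_mc_portal_free(mc_io);
           return 0;
   }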

DPIO driver
-----------

The DPIO driver is bound to DPIO objects and provides services that allow
other drivers such as the Ethernet driver to enqueue and dequeue data for
their respective objects.
Key services include:

- data availability notifications
- hardware queuing operations (enqueue and dequeue of data)
- hardware buffer pool management

To transmit a packet the Ethernet driver puts data on a queue and
invokes a DPIO API.  For receive, the Ethernet driver registers
a data availability notification callback.  To dequeue a packet
a DPIO API is used.
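
A hedged sketch of the receive side is shown below: a dequeue "store" is
created, the portal is asked to pull a burst of frames from a frame
queue, and the resulting dequeue entries are walked.  It is based on the
DPIO service API in include/soc/fsl/dpaa2-io.h and dpaa2-global.h, but
the exact names and signatures are assumptions and may differ across
kernel versions::

   #include <soc/fsl/dpaa2-io.h>
   #include <soc/fsl/dpaa2-global.h>

   static void example_poll_rx(struct dpaa2_io *dpio, u32 rx_fqid,
                               struct device *dev)
   {
           struct dpaa2_io_store *store;
           struct dpaa2_dq *dq;
           int is_last = 0;

           /* The store receives dequeue results written back by hardware. */
           store = dpaa2_io_store_create(16, dev);
           if (!store)
                   return;

           /* Ask the portal to dequeue up to a burst of frames from the FQ. */
           if (dpaa2_io_service_pull_fq(dpio, rx_fqid, store))
                   goto out;

           /* Walk the dequeue entries; each carries one frame descriptor. */
           do {
                   dq = dpaa2_io_store_next(store, &is_last);
                   if (!dq)
                           continue;       /* result not yet written back */
                   /* ... pass dpaa2_dq_fd(dq) up to the driver's RX path ... */
           } while (!is_last);

   out:
           dpaa2_io_store_destroy(store);
   }

In interrupt-driven operation the same dequeue loop runs from the data
availability notification callback mentioned above rather than from a
poll loop.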

There is typically one DPIO object per physical CPU for optimum
performance, allowing different CPUs to simultaneously enqueue
and dequeue data.

The DPIO driver operates on behalf of all DPAA2 drivers
active in the kernel-- Ethernet, crypto, compression,
etc.

Ethernet driver
---------------

The Ethernet driver is bound to a DPNI and implements the kernel
interfaces needed to connect the DPAA2 network interface to
the network stack.

Each DPNI corresponds to a Linux network interface.
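
The hedged sketch below shows that relationship in its simplest form: a
probe routine for a DPNI object allocates and registers one struct
net_device.  The real driver additionally opens the DPNI through MC
commands and binds its queues to DPIO portals; the helper names here are
illustrative only::

   #include <linux/etherdevice.h>
   #include <linux/netdevice.h>
   #include <linux/fsl/mc.h>

   struct example_priv {
           struct fsl_mc_device *dpni_dev; /* the underlying DPNI object */
   };

   static const struct net_device_ops example_netdev_ops = {
           /* .ndo_open, .ndo_stop, .ndo_start_xmit, ... */
   };

   static int example_dpni_probe(struct fsl_mc_device *mc_dev)
   {
           struct net_device *ndev;
           struct example_priv *priv;
           int err;

           ndev = alloc_etherdev(sizeof(*priv));
           if (!ndev)
                   return -ENOMEM;

           SET_NETDEV_DEV(ndev, &mc_dev->dev);
           ndev->netdev_ops = &example_netdev_ops;

           priv = netdev_priv(ndev);
           priv->dpni_dev = mc_dev;

           /* One Linux network interface per DPNI object. */
           err = register_netdev(ndev);
           if (err)
                   free_netdev(ndev);
           return err;
   }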

MAC driver
----------

An Ethernet PHY is an off-chip, board-specific component and is managed
by the appropriate PHY driver via an MDIO bus.  The MAC driver acts as
a proxy between the PHY driver and the MC, relaying state via MC
commands to a DPMAC object.
If the PHY driver signals a link change, the MAC driver notifies
the MC via a DPMAC command.  If a network interface is brought
up or down, the MC notifies the DPMAC driver via an interrupt and
the driver can take appropriate action.
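
The hedged sketch below illustrates the proxy direction from the PHY side
to the MC.  The dpmac_set_link_state() call and struct dpmac_link_state
mirror the in-kernel dpaa2-mac code, but both are redeclared here in
simplified form and should be treated as assumptions::

   #include <linux/kernel.h>
   #include <linux/fsl/mc.h>

   /* Simplified mirror of the dpaa2-mac definitions (assumed). */
   struct dpmac_link_state {
           u32 rate;       /* link speed in Mbps */
           u64 options;    /* duplex, autoneg, ... */
           int up;
   };

   int dpmac_set_link_state(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
                            struct dpmac_link_state *state);

   /* Called from the PHY/phylink side when the link comes up or goes down. */
   static void example_mac_link_changed(struct fsl_mc_io *mc_io, u16 dpmac_token,
                                        bool up, int speed_mbps)
   {
           struct dpmac_link_state state = {
                   .rate = speed_mbps,
                   .up   = up,
           };

           /* Forward the PHY-reported state to the MC via a DPMAC command. */
           if (dpmac_set_link_state(mc_io, 0, dpmac_token, &state))
                   pr_warn("dpmac: failed to update link state\n");
   }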