The QorIQ DPAA Ethernet Driver
==============================

Authors:
- Madalin Bucur <madalin.bucur@nxp.com>
- Camelia Groza <camelia.groza@nxp.com>

Contents
========

- DPAA Ethernet Overview
- DPAA Ethernet Supported SoCs
- Configuring DPAA Ethernet in your kernel
- DPAA Ethernet Frame Processing
- DPAA Ethernet Features
- DPAA IRQ Affinity and Receive Side Scaling
- Debugging

DPAA Ethernet Overview
======================

DPAA stands for Data Path Acceleration Architecture and it is a
set of networking acceleration IPs that are available on several
generations of SoCs, both on PowerPC and ARM64.

The Freescale DPAA architecture consists of a series of hardware blocks
that support Ethernet connectivity. The Ethernet driver depends upon the
following drivers in the Linux kernel:

- Peripheral Access Memory Unit (PAMU) (* needed only for PPC platforms)
    drivers/iommu/fsl_pamu.c
- Frame Manager (FMan)
    drivers/net/ethernet/freescale/fman
- Queue Manager (QMan), Buffer Manager (BMan)
    drivers/soc/fsl/qbman

A simplified view of the dpaa_eth interfaces mapped to FMan MACs:

  dpaa_eth       /eth0\     ...       /ethN\
  driver        |      |             |      |
  -------------   ----   -----------   ----   -------------
       -Ports  / Tx  Rx \    ...    / Tx  Rx \
  FMan        |          |         |          |
       -MACs  |   MAC0   |         |   MACN   |
             /   dtsec0   \  ...  /   dtsecN   \ (or tgec)
            /              \     /              \ (or memac)
  ---------  --------------  ---  --------------  ---------
      FMan, FMan Port, FMan SP, FMan MURAM drivers
  ---------------------------------------------------------
      FMan HW blocks: MURAM, MACs, Ports, SP
  ---------------------------------------------------------

The dpaa_eth relation to the QMan, BMan and FMan:

              ________________________________
  dpaa_eth   /            \
  driver    /               \
  ---------   -^-   -^-   -^-   ---   ---------
  QMan driver / \   / \   / \  \   /  | BMan    |
             |Rx | |Rx | |Tx | |Tx |  | driver  |
  ---------  |Dfl| |Err| |Cnf| |FQs|  |         |
  QMan HW    |FQ | |FQ | |FQs| |   |  |         |
             /   \ /   \ /   \  \ /   |         |
  ---------   ---   ---   ---   -v-   ---------
            |        FMan QMI         |         |
            | FMan HW       FMan BMI  | BMan HW |
             -----------------------   --------

where the acronyms used above (and in the code) are:

DPAA = Data Path Acceleration Architecture
FMan = DPAA Frame Manager
QMan = DPAA Queue Manager
BMan = DPAA Buffers Manager
QMI = QMan interface in FMan
BMI = BMan interface in FMan
FMan SP = FMan Storage Profiles
MURAM = Multi-user RAM in FMan

Rx Dfl FQ = default reception FQ
Rx Err FQ = Rx error frames FQ
Tx Cnf FQ = Tx confirmation FQs
Tx FQs = transmission frame queues
dtsec = datapath three-speed Ethernet controller (10/100/1000 Mbps)
tgec = ten gigabit Ethernet controller (10 Gbps)
memac = multirate Ethernet MAC (10/100/1000/10000)

DPAA Ethernet Supported SoCs
============================

The DPAA drivers enable the Ethernet controllers present on the following SoCs:

Configuring DPAA Ethernet in your kernel
========================================

To enable the DPAA Ethernet driver, the following Kconfig options are required:

# common for arch/arm64 and arch/powerpc platforms
CONFIG_FSL_DPAA=y
CONFIG_FSL_FMAN=y
CONFIG_FSL_DPAA_ETH=y
CONFIG_FSL_XGMAC_MDIO=y

# for arch/powerpc only
CONFIG_FSL_PAMU=y

# common options needed for the PHYs used on the RDBs
CONFIG_AQUANTIA_PHY=y

DPAA Ethernet Frame Processing
==============================

On Rx, buffers for the incoming frames are retrieved from the dedicated
interface buffer pool. The driver initializes and seeds this pool with
one-page buffers, along the lines of the sketch below.

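The sketch is illustrative only; bpool_seed_one() is a hypothetical
stand-in for the real BMan buffer release path, not an actual kernel API:

  /* Hypothetical sketch, not the actual dpaa_eth code: seed an Rx
   * buffer pool with one-page buffers. */
  #include <linux/errno.h>
  #include <linux/gfp.h>
  #include <linux/mm.h>

  int bpool_seed_one(struct page *p);     /* hypothetical helper */

  static int dpaa_seed_pool_sketch(int count)
  {
          int i;

          for (i = 0; i < count; i++) {
                  /* one page per Rx buffer, as described above */
                  struct page *p = alloc_page(GFP_KERNEL);

                  if (!p)
                          return -ENOMEM;
                  if (bpool_seed_one(p)) {
                          __free_page(p); /* pool refused the buffer */
                          return -EIO;
                  }
          }
          return 0;
  }
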
On Tx, all transmitted frames are returned to the driver through Tx
confirmation frame queues. The driver is then responsible for freeing the
buffers. In order to do this properly, a backpointer to the skb is added
to the buffer before transmission. When the buffer returns to the
driver on a confirmation FQ, the skb can be correctly consumed.

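A minimal sketch of this backpointer scheme, assuming the skb pointer is
parked in reserved buffer headroom (illustrative only, not the actual
dpaa_eth implementation):

  /* Hypothetical sketch, not the actual dpaa_eth code. */
  #include <linux/skbuff.h>

  /* Before enqueue: park the skb pointer at a known offset in the
   * buffer that the hardware leaves untouched. */
  static void tx_store_backpointer(void *buf, struct sk_buff *skb)
  {
          *(struct sk_buff **)buf = skb;
  }

  /* On the Tx confirmation path: recover and free the skb. */
  static void tx_confirm_buffer(void *buf)
  {
          struct sk_buff *skb = *(struct sk_buff **)buf;

          dev_kfree_skb(skb);
  }
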
DPAA Ethernet Features
======================

Currently the DPAA Ethernet driver enables the basic features required for
a Linux Ethernet driver. Support for advanced features will be added
gradually.

The driver has Rx and Tx checksum offloading for UDP and TCP. Currently the Rx
checksum offload feature is enabled by default and cannot be controlled through
ethtool. Also, rx-flow-hash and rx-hashing were added. The addition of RSS
provides a significant performance boost for forwarding scenarios, allowing
different traffic flows received by one interface to be processed by different
CPUs in parallel.

The driver has support for multiple prioritized Tx traffic classes. Priorities
range from 0 (lowest) to 3 (highest). These are mapped to HW workqueues with
strict priority levels. Each traffic class contains NR_CPUS Tx queues. By
default, only one traffic class is enabled and the lowest priority Tx queues
are used. Higher priority traffic classes can be enabled with the mqprio
qdisc. For example, all four traffic classes are enabled on an interface with
the command below. Furthermore, skb priority levels are mapped to traffic
classes as follows:

* priorities 0 to 3 - traffic class 0 (low priority)
* priorities 4 to 7 - traffic class 1 (medium-low priority)
* priorities 8 to 11 - traffic class 2 (medium-high priority)
* priorities 12 to 15 - traffic class 3 (high priority)

	tc qdisc add dev <int> root handle 1: \
	     mqprio num_tc 4 map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1

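An application can land in a given traffic class by setting the standard
SO_PRIORITY socket option, which feeds the skb priority consumed by the
mapping above. The sketch below is illustrative only; note that skb
priorities above 6 require CAP_NET_ADMIN:

  /* Illustrative example: request traffic class 3 (high priority)
   * by setting skb priority 12 on a UDP socket. */
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
          int prio = 12;  /* priorities 12 to 15 -> traffic class 3 */
          int fd = socket(AF_INET, SOCK_DGRAM, 0);

          if (fd < 0) {
                  perror("socket");
                  return 1;
          }
          /* requires CAP_NET_ADMIN for priorities above 6 */
          if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY,
                         &prio, sizeof(prio)) < 0)
                  perror("setsockopt(SO_PRIORITY)");
          close(fd);
          return 0;
  }
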
DPAA IRQ Affinity and Receive Side Scaling
==========================================

Traffic coming in on the DPAA Rx queues or on the DPAA Tx confirmation
queues is seen by the CPU as ingress traffic on a certain portal.
The DPAA QMan portal interrupts are each affined to a certain CPU.
The same portal interrupt services all the QMan portal consumers.

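The resulting interrupt-to-CPU mapping can be inspected through the usual
procfs files; the grep pattern below is an assumption, as the exact
interrupt names vary by platform and kernel version:

	# grep -i portal /proc/interrupts
	# cat /proc/irq/<irq>/smp_affinity
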
By default the DPAA Ethernet driver enables RSS, making use of the
DPAA FMan Parser and Keygen blocks to distribute traffic on 128
hardware frame queues using a hash on IPv4/v6 source and destination
and L4 source and destination ports, if present in the received frame.
When RSS is disabled, all traffic received by a certain interface is
received on the default Rx frame queue. The default DPAA Rx frame
queues are configured to put the received traffic into a pool channel
that allows any available CPU portal to dequeue the ingress traffic.
The default frame queues have the HOLDACTIVE option set, ensuring that
traffic bursts from a certain queue are serviced by the same CPU.
This ensures a very low rate of frame reordering. A drawback is that
only one CPU at a time can service the traffic received by a certain
interface when RSS is not enabled.

To implement RSS, the DPAA Ethernet driver allocates an extra set of
128 Rx frame queues that are configured to dedicated channels, in a
round-robin manner. The mapping of the frame queues to CPUs is
hardcoded; there is no indirection table to move traffic for a certain
FQ (hash result) to another CPU. The ingress traffic arriving on one
of these frame queues will arrive at the same portal and will always
be processed by the same CPU. This ensures intra-flow order preservation
and workload distribution for multiple traffic flows.

RSS can be turned off for a certain interface using ethtool, e.g.:

	# ethtool -N fm1-mac9 rx-flow-hash tcp4 ""

To turn it back on, one needs to set rx-flow-hash for tcp4/6 or udp4/6:

	# ethtool -N fm1-mac9 rx-flow-hash udp4 sfdn

There is no independent control for individual protocols; any command
run for one of tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 is
going to control the rx-flow-hashing for all protocols on that interface.

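The hash fields currently in effect can be queried per protocol, e.g.:

	# ethtool -n fm1-mac9 rx-flow-hash tcp4
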
Besides using the FMan Keygen computed hash for spreading traffic on the
128 Rx FQs, the DPAA Ethernet driver also sets the skb hash value when
the NETIF_F_RXHASH feature is on (active by default). This can be turned
on or off through ethtool, e.g.:

	# ethtool -K fm1-mac9 rx-hashing off
	# ethtool -k fm1-mac9 | grep hash
	receive-hashing: off
	# ethtool -K fm1-mac9 rx-hashing on
	Actual changes:
	receive-hashing: on
	# ethtool -k fm1-mac9 | grep hash
	receive-hashing: on

Please note that Rx hashing depends upon the rx-flow-hashing being on
for that interface - turning off rx-flow-hashing will also disable the
rx-hashing (without ethtool reporting it as off as that depends on the
NETIF_F_RXHASH feature flag).

Debugging
=========

The following statistics are exported for each interface through ethtool:

- interrupt count per CPU
- Rx packets count per CPU
- Tx packets count per CPU
- Tx confirmed packets count per CPU
- Tx S/G frames count per CPU
- Tx error count per CPU
- Rx error count per CPU
- Rx error count per type
- congestion related statistics:

	- time spent in congestion
	- number of times the device entered congestion
	- dropped packets count per cause

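These counters can be read with ethtool's statistics dump, e.g.:

	# ethtool -S fm1-mac9
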
The driver also exports the following information in sysfs:

- the FQ IDs for each FQ type
  /sys/devices/platform/soc/<addr>.fman/<addr>.ethernet/dpaa-ethernet.<id>/net/fm<nr>-mac<nr>/fqids

- the ID of the buffer pool in use
  /sys/devices/platform/soc/<addr>.fman/<addr>.ethernet/dpaa-ethernet.<id>/net/fm<nr>-mac<nr>/bpids

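Since the network device's sysfs directory is also linked from
/sys/class/net, these attributes can typically be read without spelling
out the full platform path, e.g.:

	# cat /sys/class/net/fm1-mac9/fqids
	# cat /sys/class/net/fm1-mac9/bpids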