The QorIQ DPAA Ethernet Driver
==============================

Madalin Bucur <madalin.bucur@nxp.com>
Camelia Groza <camelia.groza@nxp.com>

- DPAA Ethernet Overview
- DPAA Ethernet Supported SoCs
- Configuring DPAA Ethernet in your kernel
- DPAA Ethernet Frame Processing
- DPAA Ethernet Features

DPAA Ethernet Overview
======================

DPAA stands for Data Path Acceleration Architecture; it is a set of
networking acceleration IPs available on several generations of SoCs,
on both PowerPC and ARM64.

The Freescale DPAA architecture consists of a series of hardware blocks
that support Ethernet connectivity. The Ethernet driver depends upon the
following drivers in the Linux kernel:

- Peripheral Access Management Unit (PAMU) (* needed only for PPC platforms)
- Frame Manager (FMan)
    drivers/net/ethernet/freescale/fman
- Queue Manager (QMan), Buffer Manager (BMan)

A simplified view of the dpaa_eth interfaces mapped to FMan MACs:

  dpaa_eth       /eth0\     ...       /ethN\
  driver        |      |             |      |
  -------------   ----   -----------   ----   -------------
       -Ports  / Tx  Rx \    ...    / Tx  Rx \
  FMan        |          |         |          |
       -MACs  |   MAC0   |         |   MACN   |
             /   dtsec0   \  ...  /   dtsecN   \ (or tgec)
            /              \     /              \ (or memac)
  ---------  --------------  ---  --------------  ---------
      FMan, FMan Port, FMan SP, FMan MURAM drivers
  ---------------------------------------------------------
      FMan HW blocks: MURAM, MACs, Ports, SP
  ---------------------------------------------------------

The dpaa_eth relation to the QMan, BMan and FMan:

              ________________________________
  dpaa_eth   /            \       /
  driver    /               \    /
  ---------   -^-   -^-   -^-   ---   ---------
  QMan driver / \   / \   / \     \  | BMan    |
              |Rx | |Rx | |Tx | |Tx | | driver  |
  ---------   |Dfl| |Err| |Cnf| |FQs| |         |
  QMan HW     |FQ | |FQ | |FQs| |   | |         |
              \   / \   / \   /  \ /  |         |
  ---------    ---   ---   ---   -v-  ---------
             |        FMan QMI       |         |
             | FMan HW      FMan BMI | BMan HW |
              -----------------------  --------

where the acronyms used above (and in the code) are:

DPAA = Data Path Acceleration Architecture
FMan = DPAA Frame Manager
QMan = DPAA Queue Manager
BMan = DPAA Buffer Manager
QMI = QMan interface in FMan
BMI = BMan interface in FMan
FMan SP = FMan Storage Profiles
MURAM = Multi-user RAM in FMan

Rx Dfl FQ = default reception FQ
Rx Err FQ = Rx error frames FQ
Tx Cnf FQ = Tx confirmation FQs
Tx FQs = transmission frame queues
dtsec = datapath three speed Ethernet controller (10/100/1000 Mbps)
tgec = ten gigabit Ethernet controller (10 Gbps)
memac = multirate Ethernet MAC (10/100/1000/10000)

DPAA Ethernet Supported SoCs
============================

The DPAA drivers enable the Ethernet controllers present on the following SoCs:

Configuring DPAA Ethernet in your kernel
========================================

To enable the DPAA Ethernet driver, the following Kconfig options are required:

  # common for arch/arm64 and arch/powerpc platforms
  CONFIG_FSL_DPAA_ETH=y
  CONFIG_FSL_XGMAC_MDIO=y

  # for arch/powerpc only
  CONFIG_FSL_PAMU=y

  # common options needed for the PHYs used on the RDBs
  CONFIG_AQUANTIA_PHY=y
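
One possible way to apply these options to an existing configuration is the
kernel's scripts/config helper. This is only a sketch: it assumes a .config
file is already present at the top of the source tree, and only the options
listed above are implied by this document.

  # run from the top of the kernel source tree
  ./scripts/config --enable CONFIG_FSL_DPAA_ETH \
                   --enable CONFIG_FSL_XGMAC_MDIO \
                   --enable CONFIG_AQUANTIA_PHY

  # re-resolve any dependencies exposed by the changes
  make olddefconfig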

DPAA Ethernet Frame Processing
==============================

On Rx, buffers for the incoming frames are retrieved from one of the three
existing buffer pools. The driver initializes and seeds these, each with
buffers of a different size: 1 KB, 2 KB and 4 KB.

On Tx, all transmitted frames are returned to the driver through Tx
confirmation frame queues. The driver is then responsible for freeing the
buffers. To do this properly, a backpointer to the skb is added to the buffer
before transmission. When the buffer returns to the driver on a confirmation
FQ, the skb can be correctly consumed.

DPAA Ethernet Features
======================

Currently the DPAA Ethernet driver enables the basic features required for
a Linux Ethernet driver. Support for advanced features will be added
gradually.

The driver has Rx and Tx checksum offloading for UDP and TCP. Currently the Rx
checksum offload feature is enabled by default and cannot be controlled through
ethtool.
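
For illustration, the offload state can be inspected with standard ethtool
commands. Here eth0 is a placeholder interface name, and toggling the Tx side
with -K is the generic ethtool operation rather than a driver-specific
guarantee:

  # show the current checksum offload settings
  ethtool -k eth0 | grep checksum

  # Rx checksum offload cannot be controlled (see above);
  # the Tx side can typically be toggled
  ethtool -K eth0 tx off
  ethtool -K eth0 tx on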

The driver has support for multiple prioritized Tx traffic classes. Priorities
range from 0 (lowest) to 3 (highest). These are mapped to HW workqueues with
strict priority levels. Each traffic class contains NR_CPUS Tx queues. By
default, only one traffic class is enabled and the lowest priority Tx queues
are used. Higher priority traffic classes can be enabled with the mqprio
qdisc. For example, the command below enables all four traffic classes on an
interface. Furthermore, skb priority levels are mapped to traffic classes as
follows:

* priorities 0 to 3 - traffic class 0 (low priority)
* priorities 4 to 7 - traffic class 1 (medium-low priority)
* priorities 8 to 11 - traffic class 2 (medium-high priority)
* priorities 12 to 15 - traffic class 3 (high priority)

  tc qdisc add dev <int> root handle 1: \
       mqprio num_tc 4 map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
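
As a further usage sketch (eth0 is again only a placeholder name), the same
qdisc can enable just two traffic classes, mapping skb priorities 0-7 to
class 0 and 8-15 to class 1, and the resulting mapping can then be inspected
with tc:

  # enable two traffic classes; priorities 0-7 -> TC0, 8-15 -> TC1
  tc qdisc add dev eth0 root handle 1: \
       mqprio num_tc 2 map 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 hw 1

  # display the configured priority-to-traffic-class map
  tc qdisc show dev eth0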

The following statistics are exported for each interface through ethtool:

- interrupt count per CPU
- Rx packets count per CPU
- Tx packets count per CPU
- Tx confirmed packets count per CPU
- Tx S/G frames count per CPU
- Tx error count per CPU
- Rx error count per CPU
- Rx error count per type
- congestion related statistics:
    - time spent in congestion
    - number of times the device entered congestion
    - dropped packets count per cause
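
These counters can be read with the standard ethtool statistics command.
eth0 is a placeholder name and the exact counter names are defined by the
driver:

  # dump all driver-specific counters
  ethtool -S eth0

  # follow the per-CPU Rx/Tx counters as traffic flows
  watch -n 1 "ethtool -S eth0 | grep -i -e rx -e tx"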

The driver also exports the following information in sysfs:

- the FQ IDs for each FQ type
  /sys/devices/platform/dpaa-ethernet.0/net/<int>/fqids

- the IDs of the buffer pools in use
  /sys/devices/platform/dpaa-ethernet.0/net/<int>/bpids
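
As a usage sketch, these files can simply be read with cat, substituting the
actual interface name for <int>; the dpaa-ethernet.0 platform device suffix
is taken from the paths above and may differ between boards:

  # frame queue IDs used by the interface
  cat /sys/devices/platform/dpaa-ethernet.0/net/eth0/fqids

  # buffer pool IDs in use
  cat /sys/devices/platform/dpaa-ethernet.0/net/eth0/bpids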