1 =========================================================
2 NVIDIA Tegra SoC Uncore Performance Monitoring Unit (PMU)
3 =========================================================
5 The NVIDIA Tegra SoC includes various system PMUs to measure key performance
6 metrics like memory bandwidth, latency, and utilization:
* Scalable Coherency Fabric (SCF)
* NVLink-C2C0
* NVLink-C2C1
* CNVLink
* PCIE
The PMUs in this document are based on the ARM CoreSight PMU Architecture as
described in the ARM IHI 0091 document. Since this is a standard architecture, the
PMUs are managed by a common driver, "arm-cs-arch-pmu". This driver describes
the available events and configuration of each PMU in sysfs. Please see the
sections below to get the sysfs path of each PMU. Like other uncore PMU drivers,
the driver provides a "cpumask" sysfs attribute to show the CPU id used to handle
the PMU event. There is also an "associated_cpus" sysfs attribute, which contains
a list of CPUs associated with the PMU instance.
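Before counting events, it can be useful to inspect these attributes directly
from sysfs. A minimal sketch, assuming a socket-0 SCF PMU instance (described
in the next section) is present on the system::

   cat /sys/bus/event_source/devices/nvidia_scf_pmu_0/cpumask
   cat /sys/bus/event_source/devices/nvidia_scf_pmu_0/associated_cpus
   ls /sys/bus/event_source/devices/nvidia_scf_pmu_0/events/
   ls /sys/bus/event_source/devices/nvidia_scf_pmu_0/format/
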
SCF PMU
-------

The SCF PMU monitors system level cache events, CPU traffic, and
strongly-ordered (SO) PCIE write traffic to local/remote memory. Please see
:ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about the PMU
traffic coverage.
36 The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_scf_pmu_<socket-id>.

Example usage:

41 * Count event id 0x0 in socket 0::
43 perf stat -a -e nvidia_scf_pmu_0/event=0x0/
45 * Count event id 0x0 in socket 1::
47 perf stat -a -e nvidia_scf_pmu_1/event=0x0/
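The counts can also be collected at a fixed interval to observe how they change
over time. A sketch using standard perf options, assuming the socket-0 instance
above; "-I 1000" prints running counts every 1000 ms and "sleep 10" bounds the
measurement window::

   perf stat -a -e nvidia_scf_pmu_0/event=0x0/ -I 1000 sleep 10
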
NVLink-C2C0 PMU
---------------

The NVLink-C2C0 PMU monitors incoming traffic from a GPU/CPU connected with
NVLink-C2C (Chip-2-Chip) interconnect. The type of traffic captured by this PMU
varies depending on the chip configuration:
56 * NVIDIA Grace Hopper Superchip: Hopper GPU is connected with Grace SoC.
58 In this config, the PMU captures GPU ATS translated or EGM traffic from the GPU.
60 * NVIDIA Grace CPU Superchip: two Grace CPU SoCs are connected.
In this config, the PMU captures reads and relaxed ordered (RO) writes from
the PCIE device of the remote SoC.
65 Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about
66 the PMU traffic coverage.
68 The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_nvlink_c2c0_pmu_<socket-id>.

Example usage:

73 * Count event id 0x0 from the GPU/CPU connected with socket 0::
75 perf stat -a -e nvidia_nvlink_c2c0_pmu_0/event=0x0/
77 * Count event id 0x0 from the GPU/CPU connected with socket 1::
79 perf stat -a -e nvidia_nvlink_c2c0_pmu_1/event=0x0/
81 * Count event id 0x0 from the GPU/CPU connected with socket 2::
83 perf stat -a -e nvidia_nvlink_c2c0_pmu_2/event=0x0/
85 * Count event id 0x0 from the GPU/CPU connected with socket 3::
87 perf stat -a -e nvidia_nvlink_c2c0_pmu_3/event=0x0/
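On a multi-socket system, the per-socket instances can be combined in a single
invocation to compare incoming traffic across sockets. A sketch assuming the
socket 0 and socket 1 instances above are both present::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_0/event=0x0/ -e nvidia_nvlink_c2c0_pmu_1/event=0x0/
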
NVLink-C2C1 PMU
---------------

The NVLink-C2C1 PMU monitors incoming traffic from a GPU connected with
NVLink-C2C (Chip-2-Chip) interconnect. This PMU captures untranslated GPU
traffic, in contrast with the NVLink-C2C0 PMU that captures ATS translated traffic.
95 Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about
96 the PMU traffic coverage.
98 The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_nvlink_c2c1_pmu_<socket-id>.

Example usage:

103 * Count event id 0x0 from the GPU connected with socket 0::
105 perf stat -a -e nvidia_nvlink_c2c1_pmu_0/event=0x0/
107 * Count event id 0x0 from the GPU connected with socket 1::
109 perf stat -a -e nvidia_nvlink_c2c1_pmu_1/event=0x0/
111 * Count event id 0x0 from the GPU connected with socket 2::
113 perf stat -a -e nvidia_nvlink_c2c1_pmu_2/event=0x0/
115 * Count event id 0x0 from the GPU connected with socket 3::
117 perf stat -a -e nvidia_nvlink_c2c1_pmu_3/event=0x0/
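Because NVLink-C2C0 captures ATS translated traffic and NVLink-C2C1 captures
untranslated traffic, counting the same event id on both PMUs of one socket
gives a rough split between the two traffic types. A sketch for socket 0, using
event id 0x0 as a placeholder::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_0/event=0x0/ -e nvidia_nvlink_c2c1_pmu_0/event=0x0/
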
CNVLink PMU
-----------

The CNVLink PMU monitors traffic from GPUs and PCIE devices on remote sockets
to local memory. For PCIE traffic, this PMU captures read and relaxed ordered
124 (RO) write traffic. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section`
125 for more info about the PMU traffic coverage.
127 The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_cnvlink_pmu_<socket-id>.
Each SoC socket can be connected to one or more sockets via CNVLink. The user can
use the "rem_socket" bitmap parameter to select the remote socket(s) to monitor.
Each bit represents the socket number, e.g. "rem_socket=0xE" corresponds to
sockets 1 to 3.
/sys/bus/event_source/devices/nvidia_cnvlink_pmu_<socket-id>/format/rem_socket
shows the valid bits that can be set in the "rem_socket" parameter.
The PMU cannot distinguish the remote traffic initiator, therefore it does not
provide a filter to select the traffic source to monitor. It reports combined
traffic from remote GPU and PCIE devices.

Example usage:

143 * Count event id 0x0 for the traffic from remote socket 1, 2, and 3 to socket 0::
145 perf stat -a -e nvidia_cnvlink_pmu_0/event=0x0,rem_socket=0xE/
147 * Count event id 0x0 for the traffic from remote socket 0, 2, and 3 to socket 1::
149 perf stat -a -e nvidia_cnvlink_pmu_1/event=0x0,rem_socket=0xD/
151 * Count event id 0x0 for the traffic from remote socket 0, 1, and 3 to socket 2::
153 perf stat -a -e nvidia_cnvlink_pmu_2/event=0x0,rem_socket=0xB/
155 * Count event id 0x0 for the traffic from remote socket 0, 1, and 2 to socket 3::
157 perf stat -a -e nvidia_cnvlink_pmu_3/event=0x0,rem_socket=0x7/
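The "rem_socket" bitmap can also select a single remote socket. For example, to
count only the traffic coming from remote socket 2 into socket 0, set bit 2 of
the bitmap::

   perf stat -a -e nvidia_cnvlink_pmu_0/event=0x0,rem_socket=0x4/
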
PCIE PMU
--------

The PCIE PMU monitors all read/write traffic from PCIE root ports to
164 local/remote memory. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section`
165 for more info about the PMU traffic coverage.
167 The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_pcie_pmu_<socket-id>.
Each SoC socket can support multiple root ports. The user can use the
"root_port" bitmap parameter to select the port(s) to monitor, e.g.
"root_port=0xF" corresponds to root ports 0 to 3.
/sys/bus/event_source/devices/nvidia_pcie_pmu_<socket-id>/format/root_port
shows the valid bits that can be set in the "root_port" parameter.

Example usage:

178 * Count event id 0x0 from root port 0 and 1 of socket 0::
180 perf stat -a -e nvidia_pcie_pmu_0/event=0x0,root_port=0x3/
182 * Count event id 0x0 from root port 0 and 1 of socket 1::
184 perf stat -a -e nvidia_pcie_pmu_1/event=0x0,root_port=0x3/
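A single root port can be selected in the same way with a one-bit "root_port"
value, and the supported bit range can be checked in the format attribute
first. A sketch for root port 0 of socket 0::

   cat /sys/bus/event_source/devices/nvidia_pcie_pmu_0/format/root_port
   perf stat -a -e nvidia_pcie_pmu_0/event=0x0,root_port=0x1/
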
186 .. _NVIDIA_Uncore_PMU_Traffic_Coverage_Section:
Traffic Coverage
----------------

The PMU traffic coverage may vary depending on the chip configuration:
193 * **NVIDIA Grace Hopper Superchip**: Hopper GPU is connected with Grace SoC.
195 Example configuration with two Grace SoCs::
   *************************************   *************************************
   * SOCKET-A                          *   * SOCKET-B                          *
   *                                   *   *                                   *
   *                         ::::::::  *   *  ::::::::                         *
   *                         : PCIE :  *   *  : PCIE :                         *
   *                         ::::::::  *   *  ::::::::                         *
   *                            :      *   *      :                            *
   *  :::::::            :::::::::     *   *     :::::::::            :::::::  *
   *  :     :            :       :     *   *     :       :            :     :  *
   *  : GPU :<--NVLink-->: Grace :<---CNVLink--->: Grace :<--NVLink-->: GPU :  *
   *  :     :    C2C     : SoC   :     *   *     : SoC   :    C2C     :     :  *
   *  :::::::            :::::::::     *   *     :::::::::            :::::::  *
   *                                   *   *                                   *
   *  &&&&&&&&            &&&&&&&&     *   *     &&&&&&&&            &&&&&&&&  *
   *  & GMEM &            & CMEM &     *   *     & CMEM &            & GMEM &  *
   *  &&&&&&&&            &&&&&&&&     *   *     &&&&&&&&            &&&&&&&&  *
   *                                   *   *                                   *
   *************************************   *************************************
218 GMEM = GPU Memory (e.g. HBM)
219 CMEM = CPU Memory (e.g. LPDDR5X)
The following table contains the traffic coverage of the Grace SoC PMUs in
socket-A; each row is a traffic destination and each column is a traffic source:
+--------------+-------+-----------+-----------+-----+----------+----------+
|              |       |GPU ATS    |GPU Not-ATS|     | Socket-B | Socket-B |
| Destination  |PCI R/W|Translated,|Translated | CPU | CPU/PCIE1| GPU/PCIE2|
|              |       |EGM        |           |     |          |          |
+==============+=======+===========+===========+=====+==========+==========+
| Local        | PCIE  |NVLink-C2C0|NVLink-C2C1| SCF | SCF PMU  | CNVLink  |
| SYSRAM/CMEM  | PMU   |PMU        |PMU        | PMU |          | PMU      |
+--------------+-------+-----------+-----------+-----+----------+----------+
| Local GMEM   | PCIE  | N/A       |NVLink-C2C1| SCF | SCF PMU  | CNVLink  |
|              | PMU   |           |PMU        | PMU |          | PMU      |
+--------------+-------+-----------+-----------+-----+----------+----------+
| Remote       | PCIE  |NVLink-C2C0|NVLink-C2C1| SCF |          |          |
| SYSRAM/CMEM  | PMU   |PMU        |PMU        | PMU | N/A      | N/A      |
| over CNVLink |       |           |           |     |          |          |
+--------------+-------+-----------+-----------+-----+----------+----------+
| Remote GMEM  | PCIE  |NVLink-C2C0|NVLink-C2C1| SCF |          |          |
| over CNVLink | PMU   |PMU        |PMU        | PMU | N/A      | N/A      |
+--------------+-------+-----------+-----------+-----+----------+----------+
247 PCIE1 traffic represents strongly ordered (SO) writes.
248 PCIE2 traffic represents reads and relaxed ordered (RO) writes.
250 * **NVIDIA Grace CPU Superchip**: two Grace CPU SoCs are connected.
252 Example configuration with two Grace SoCs::
   ***********************        ***********************
   * SOCKET-A            *        * SOCKET-B            *
   *                     *        *                     *
   *      ::::::::       *        *       ::::::::      *
   *      : PCIE :       *        *       : PCIE :      *
   *      ::::::::       *        *       ::::::::      *
   *         :           *        *           :         *
   *  :::::::::          *        *          :::::::::  *
   *  : Grace :<-----------NVLink----------->: Grace :  *
   *  : SoC   :          *  C2C   *          : SoC   :  *
   *  :::::::::          *        *          :::::::::  *
   *                     *        *                     *
   *  &&&&&&&&           *        *           &&&&&&&&  *
   *  & CMEM &           *        *           & CMEM &  *
   *  &&&&&&&&           *        *           &&&&&&&&  *
   *                     *        *                     *
   ***********************        ***********************
CMEM = CPU Memory (e.g. LPDDR5X)
The following table contains the traffic coverage of the Grace SoC PMUs in
socket-A; each row is a traffic destination and each column is a traffic source:
+-----------------+-----------+---------+----------+-------------+
|                 |           |         | Socket-B | Socket-B    |
| Destination     | PCI R/W   | CPU     | CPU/PCIE1| PCIE2       |
+=================+===========+=========+==========+=============+
| Local           | PCIE PMU  | SCF PMU | SCF PMU  | NVLink-C2C0 |
| SYSRAM/CMEM     |           |         |          | PMU         |
+-----------------+-----------+---------+----------+-------------+
| Remote          |           |         |          |             |
| SYSRAM/CMEM     | PCIE PMU  | SCF PMU | N/A      | N/A         |
| over NVLink-C2C |           |         |          |             |
+-----------------+-----------+---------+----------+-------------+
298 PCIE1 traffic represents strongly ordered (SO) writes.
299 PCIE2 traffic represents reads and relaxed ordered (RO) writes.
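The tables above map each source/destination pair to the PMU that covers it. As
a worked example: in the Grace Hopper configuration, socket-B GPU/PCIE2 traffic
to socket-A local memory is covered by the socket-A CNVLink PMU, while in the
Grace CPU Superchip configuration, socket-B PCIE2 traffic to socket-A local
memory is covered by the socket-A NVLink-C2C0 PMU. A sketch of the corresponding
commands, using event id 0x0 as a placeholder::

   # Grace Hopper: socket-B GPU/PCIE2 traffic to socket-A local memory
   perf stat -a -e nvidia_cnvlink_pmu_0/event=0x0,rem_socket=0x2/

   # Grace CPU Superchip: socket-B PCIE2 traffic to socket-A local memory
   perf stat -a -e nvidia_nvlink_c2c0_pmu_0/event=0x0/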