This driver is compatible with Windows Server 2012 R2, 2016 and
Windows 10.
The netvsc driver supports checksum offload as long as the
Hyper-V host version does. Windows Server 2016 and Azure
support checksum offload for TCP and UDP for both IPv4 and
IPv6. Windows Server 2012 only supports checksum offload for TCP.
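For example, the offload state can be inspected and toggled from the
guest with ethtool; the interface name eth0 below is only illustrative:

    ethtool -k eth0 | grep checksumming
    ethtool -K eth0 rx on tx on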
Hyper-V supports receive side scaling. For TCP & UDP, packets can
be distributed among available queues based on IP address and port
number.
For TCP & UDP, the hash level can be switched between L3 and L4 with the
ethtool command. TCP/UDP over IPv4 and IPv6 can be set differently. The
default hash level is L4. Currently only the TX hash level can be switched
from within the guest.
On Azure, fragmented UDP packets have a high loss rate with L4
hashing. Using L3 hashing is recommended in this case.
For example, for UDP over IPv4 on eth0:

To include UDP port numbers in hashing:

    ethtool -N eth0 rx-flow-hash udp4 sdfn

To exclude UDP port numbers in hashing:

    ethtool -N eth0 rx-flow-hash udp4 sd

To show UDP hash level:

    ethtool -n eth0 rx-flow-hash udp4
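The same syntax applies to the other flow types. For instance, a guest
could switch TCP over IPv4 to L3 hashing and then read the setting back
(this is only an illustration; eth0 is a placeholder name):

    ethtool -N eth0 rx-flow-hash tcp4 sd
    ethtool -n eth0 rx-flow-hash tcp4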
Generic Receive Offload, aka GRO
--------------------------------
The driver supports GRO and it is enabled by default. GRO coalesces
like packets and significantly reduces CPU usage under heavy Rx
load.
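As an illustration (eth0 being a placeholder interface name), the GRO
state can be checked and changed with ethtool:

    ethtool -k eth0 | grep generic-receive-offload
    ethtool -K eth0 gro off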
Large Receive Offload (LRO), or Receive Side Coalescing (RSC)
-------------------------------------------------------------
The driver supports LRO/RSC in the vSwitch feature. It reduces the per-packet
processing overhead by coalescing multiple TCP segments when possible. The
feature is enabled by default on VMs running on Windows Server 2019 and
later. It may be changed with the ethtool command:

    ethtool -K eth0 lro on
    ethtool -K eth0 lro off
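The current state can be inspected the same way (again with eth0 as a
placeholder):

    ethtool -k eth0 | grep large-receive-offload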
Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
is enabled in both the vSwitch and the guest configuration, then the
Virtual Function (VF) device is passed to the guest as a PCI
device. In this case, both a synthetic (netvsc) and VF device are
visible in the guest OS and both NICs have the same MAC address.
The VF is enslaved by the netvsc device. The netvsc driver will transparently
switch the data path to the VF when it is available and up.
Network state (addresses, firewall, etc.) should be applied only to the
netvsc device; the slave device should not be accessed directly in
most cases. The exceptions are if some special queue discipline or
flow direction is desired; these should be applied directly to the
VF slave device.
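As a rough illustration (the interface name and address are hypothetical),
both devices can be listed from inside the guest, and the network state is
applied to the netvsc device only:

    ip -br link show                     # netvsc NIC and VF report the same MAC
    ip addr add 10.0.0.5/24 dev eth0     # configure the netvsc device, not the VF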
Packets are received into a receive area which is created when the device
is probed. The receive area is broken into MTU-sized chunks and each may
contain one or more packets. The number of receive sections may be changed
via ethtool Rx ring parameters.
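For example (eth0 and the value 2048 are placeholders; the value must stay
within the limits reported by -g):

    ethtool -g eth0
    ethtool -G eth0 rx 2048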
There is a similar send buffer which is used to aggregate packets for sending.
The send area is broken into chunks of 6144 bytes, and each section may
contain one or more packets. The send buffer is an optimization; the driver
will use a slower method to handle very large packets or if the send buffer
area is exhausted.
XDP (eXpress Data Path) is a feature that runs eBPF bytecode at the early
stage when packets arrive at a NIC card. The goal is to increase performance
for packet processing, reducing the overhead of SKB allocation and other
upper network layers.
hv_netvsc supports XDP in native mode, and transparently sets the XDP
program on the associated VF NIC as well.
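As a sketch (the interface name, object file, and section name are
hypothetical), a program can be attached in native mode with iproute2 and
detached again:

    ip link set dev eth0 xdpdrv obj xdp_prog.o sec xdp
    ip link set dev eth0 xdpdrv off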
Setting or unsetting an XDP program on the synthetic NIC (netvsc) propagates
to the VF NIC automatically. Setting or unsetting an XDP program on the VF
NIC directly is not recommended; it is not propagated to the synthetic NIC
and may be overwritten by the setting on the synthetic NIC.
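Where bpftool is available, the resulting attachments can be listed to
confirm which NIC carries the program:

    bpftool net show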
XDP program cannot run with LRO (RSC) enabled, so you need to disable LRO
before running XDP:

    ethtool -K eth0 lro off
XDP_REDIRECT action is not yet supported.