Documentation for /proc/sys/net/*
(c) 1999  Terrehon Bowden <terrehon@pacbell.net>
          Bodo Bauer <bb@ricochet.net>
(c) 2000  Jorge Nerin <comandante@zaralinux.com>
(c) 2009  Shen Feng <shen@cn.fujitsu.com>
For general info and legal blurb, please look in README.
==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/net.

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.
Table : Subdirectories in /proc/sys/net
..............................................................................
 Directory  Content             Directory   Content
 core       General parameter   appletalk   Appletalk protocol
 unix       Unix domain sockets netrom      NET/ROM
 802        E802 protocol       ax25        AX25
 ethernet   Ethernet protocol   rose        X.25 PLP layer
 ipv4       IP version 4        x25         X.25 protocol
 ipx        IPX                 token-ring  IBM token ring
 bridge     Bridging            decnet      DEC net
 ipv6       IP version 6        tipc        TIPC
..............................................................................
1. /proc/sys/net/core - Network core options
-------------------------------------------------------

bpf_jit_enable
--------------
This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure allowing the execution of bytecode at
various hook points. It is used in a number of Linux kernel subsystems
such as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes,
tracepoints) and security (e.g. seccomp). LLVM has a BPF back end that
can compile restricted C into a sequence of BPF instructions. After
program load through bpf(2) and passing a verifier in the kernel, a JIT
will then translate these BPF proglets into native CPU instructions.
There are
two flavors of JITs, the newer eBPF JIT currently supported on:
  - x86_64
  - arm64
  - ppc64
  - sparc64
  - mips64
  - s390x
And the older cBPF JIT supported on the following archs:
  - arm
  - mips
  - ppc
  - sparc
eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc., but not eBPF programs
loaded through bpf(2).

Values :
  0 - disable the JIT (default value)
  1 - enable the JIT
  2 - enable the JIT and ask the compiler to emit traces on kernel log.
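
For example, to enable the JIT at runtime and verify the setting (a
minimal sketch; the file lives under /proc/sys/net/core as described
above):

  myhost:~# echo 1 > /proc/sys/net/core/bpf_jit_enable
  myhost:~# cat /proc/sys/net/core/bpf_jit_enable
  1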
bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. Hardening is supported
for eBPF JIT backends. Enabling hardening trades off performance, but
can mitigate JIT spraying.

Values :
  0 - disable JIT hardening (default value)
  1 - enable JIT hardening for unprivileged users only
  2 - enable JIT hardening for all users
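
As an illustration, hardening can be enabled for all users like this
(sysctl -w echoes back the value it has set):

  myhost:~# sysctl -w net.core.bpf_jit_harden=2
  net.core.bpf_jit_harden = 2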
bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, the compiled images reside at
addresses unknown to the kernel, meaning they neither show up in traces
nor in /proc/kallsyms. This enables export of these addresses, which
can be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.

Values :
  0 - disable JIT kallsyms export (default value)
  1 - enable JIT kallsyms export for privileged users only
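
A quick way to check the effect (JITed programs show up in kallsyms
with a bpf_prog_ symbol prefix; the exact output depends on the loaded
programs):

  myhost:~# echo 1 > /proc/sys/net/core/bpf_jit_kallsyms
  myhost:~# grep bpf_prog_ /proc/kallsyms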
dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI
interrupt; it is a per-CPU variable. For drivers that support LRO or
GRO_HW, a hardware-aggregated packet is counted as one packet in this
context.
dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing competes with the registered NAPI poll
function of the driver for the per-softirq-cycle netdev_budget. This
parameter influences the proportion of the configured netdev_budget
that is spent on RPS-based packet processing during RX softirq cycles.
It is further meant for making the current dev_weight adaptable for
asymmetric CPU needs on the RX/TX side of the network stack (see
dev_weight_tx_bias). It is effective on a per-CPU basis. Determination
is based on dev_weight and is calculated multiplicatively
(dev_weight * dev_weight_rx_bias).
dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX
softirq cycle. Effective on a per-CPU basis. Allows scaling of the
current dev_weight for asymmetric net stack processing needs. Be
careful to avoid making TX softirq processing a CPU hog. Calculation is
based on dev_weight (dev_weight * dev_weight_tx_bias).
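
To illustrate the multiplicative calculation: with the common default
dev_weight of 64, setting dev_weight_tx_bias to 2 lets one TX softirq
cycle process up to 64 * 2 = 128 packets, while the RX limit
(dev_weight * dev_weight_rx_bias) stays unchanged:

  myhost:~# cat /proc/sys/net/core/dev_weight
  64
  myhost:~# echo 2 > /proc/sys/net/core/dev_weight_tx_bias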
default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the
default queuing discipline is created without additional parameters, it
is best suited to queuing disciplines that work well without
configuration, like stochastic fair queue (sfq), CoDel (codel) or fair
queue CoDel (fq_codel). Don't use queuing disciplines like Hierarchical
Token Bucket or Deficit Round Robin which require setting up classes
and bandwidths. Note that physical multiqueue interfaces still use mq
as root qdisc, which in turn uses this default for its leaves. Virtual
devices (like e.g. lo or veth) ignore this setting and instead default
to noqueue.
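
For example, to make fq_codel (one of the parameterless disciplines
recommended above) the default:

  myhost:~# echo fq_codel > /proc/sys/net/core/default_qdisc
  myhost:~# cat /proc/sys/net/core/default_qdisc
  fq_codel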
busy_read
---------

Low latency busy poll timeout for socket reads. (needs
CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for packets on the device
queue. This sets the default value of the SO_BUSY_POLL socket option.
Can be set or overridden per socket by setting the SO_BUSY_POLL socket
option, which is the preferred method of enabling the feature. If you
need to enable the feature globally via sysctl, a value of 50 is
recommended.
Will increase power usage.
Default: 0 (off)
busy_poll
---------

Low latency busy poll timeout for poll and select. (needs
CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for events.
The recommended value depends on the number of sockets you poll on:
for several sockets use 50, for several hundreds use 100.
For more than that you probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or
set net.core.busy_read globally.
Will increase power usage.
Default: 0 (off)
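
To enable busy polling globally with the values recommended above (a
sketch; setting SO_BUSY_POLL per socket remains the preferred method):

  myhost:~# echo 50 > /proc/sys/net/core/busy_read
  myhost:~# echo 50 > /proc/sys/net/core/busy_poll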
rmem_default
------------

The default setting of the socket receive buffer in bytes.
rmem_max
--------

The maximum receive socket buffer size in bytes.
tstamp_allow_data
-----------------

Allow processes to receive tx timestamps looped together with the
original packet contents. If disabled, transmit timestamp requests from
unprivileged processes are dropped unless the socket option
SOF_TIMESTAMPING_OPT_TSONLY is set.
Default: 1 (on)
wmem_default
------------

The default setting (in bytes) of the socket send buffer.
wmem_max
--------

The maximum send socket buffer size in bytes.
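
For instance, to let applications request larger buffers through the
SO_RCVBUF/SO_SNDBUF socket options, which these sysctls cap (8388608 is
an illustrative value, not a recommendation):

  myhost:~# echo 8388608 > /proc/sys/net/core/rmem_max
  myhost:~# echo 8388608 > /proc/sys/net/core/wmem_max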
message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the
kernel log from the networking code. They enforce a rate limit to make
a denial-of-service attack impossible. A higher message_cost factor
results in fewer messages being written. message_burst controls when
messages will be dropped. The default settings limit warning messages
to one every five seconds.
warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack
that occur because of problems on the network, like duplicate addresses
or bad checksums.

These messages are now emitted at KERN_DEBUG and can generally be
enabled and controlled by the dynamic_debug facility.
netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling
cycle (NAPI poll). In one polling cycle interfaces which are registered
to polling are probed in a round-robin manner. Also, a polling cycle
may not exceed netdev_budget_usecs microseconds, even if netdev_budget
has not been exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling will
exit when either netdev_budget_usecs have elapsed during the poll cycle
or the number of packets processed reaches netdev_budget.
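
A quick look at the pair (300 is the long-standing netdev_budget
default; the new value is illustrative only):

  myhost:~# cat /proc/sys/net/core/netdev_budget
  300
  myhost:~# echo 600 > /proc/sys/net/core/netdev_budget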
netdev_max_backlog
------------------

Maximum number of packets queued on the INPUT side when the interface
receives packets faster than the kernel can process them.
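
For example, to raise the backlog on a host that drops packets under
bursty load (1000 is the usual default; 2000 is illustrative):

  myhost:~# echo 2000 > /proc/sys/net/core/netdev_max_backlog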
netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40 byte host key that
is randomly generated.
Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

myhost:~# cat /proc/sys/net/core/netdev_rss_key
84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

The file contains nul bytes if no driver ever called the
netdev_rss_key_fill() function.
Note:
/proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
but most drivers only use 40 bytes of it.

myhost:~# ethtool -x eth0
RX flow hash indirection table for eth0 with 8 RX ring(s):
    0:    0     1     2     3     4     5     6     7
RSS hash key:
84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89
netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing,
when the target CPU processes packets. This might add some delay to the
timestamps, but permits distributing the load over several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible,
before queueing.
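
For example, to defer timestamping until after RPS has distributed the
load (trading some timestamp accuracy for better CPU spread, as
described above):

  myhost:~# echo 0 > /proc/sys/net/core/netdev_tstamp_prequeue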
optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a
sequence of struct cmsghdr structures with appended data.
2. /proc/sys/net/unix - Parameters for Unix domain sockets
-------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the maximum number of datagrams queued in a Unix
domain socket's buffer. It will not take effect unless the PF_UNIX flag
is specified.
3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------------------------

Please see: Documentation/networking/ip-sysctl.txt and ipvs-sysctl.txt
for descriptions of these entries.
4. Appletalk
-------------------------------------------------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration
data when Appletalk is loaded. The configurable parameters are:
aarp-expiry-time
----------------

The amount of time we keep an ARP entry before expiring it. Used to age
out old hosts.
aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk
address.
aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.
aarp-tick-time
--------------

Controls the rate at which expires are checked.
The directory /proc/net/appletalk holds the list of active Appletalk
sockets on a machine.

The fields indicate the DDP type, the local address (in network:node
format), the remote address, the size of the transmit pending queue,
the size of the received queue (bytes waiting for applications to
read), the state and the uid owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for
Appletalk. It shows the name of the interface, its Appletalk address,
the network range on that address (or network number for phase 1
networks), and the status of the interface.

/proc/net/atalk_route lists each known network route. It lists the
target (network) that the route leads to, the router (may be directly
connected), the route flags, and the device the route is using.
5. IPX
-------------------------------------------------------

The IPX protocol has no tunable values in /proc/sys/net.

The IPX protocol does, however, provide /proc/net/ipx. This lists each
IPX socket, giving the local and remote addresses in Novell format
(that is network:node:port). In accordance with the strange Novell
tradition, everything but the port is in hex. Not_Connected is
displayed for sockets that are not tied to a specific remote address.
The Tx and Rx queue sizes indicate the number of bytes pending for
transmission and reception. The state indicates the state the socket is
in and the uid is the owning uid of the socket.

The /proc/net/ipx_interface file lists all IPX interfaces. For each
interface it gives the network number, the node number, and indicates
if the network is the primary network. It also indicates which device
it is bound to (or Internal for internal networks) and the Frame Type
if appropriate. Linux supports 802.3, 802.2, 802.2 SNAP and DIX (Blue
Book) ethernet framing for IPX.

The /proc/net/ipx_route table holds a list of IPX routes. For each
route it gives the destination network, the router node (or Directly)
and the network address of the router (or Connected) for internal
networks.
6. TIPC
-------------------------------------------------------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to
the tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)

    # cat /proc/sys/net/tipc/tipc_rmem
    4252725 34021800 68043600
    #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min
values are scaled (shifted) versions of that same value. Note that the
min value is not at this point in time used in any meaningful way, but
the triplet is preserved in order to be consistent with things like
tcp_rmem.
named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster,
without any form of transaction handling. This means that different
race scenarios are possible. One such scenario is that a name
withdrawal sent out by one node and received by another node may arrive
after a second, overlapping name publication has already been accepted
from a third node, although the conflicting updates originally may have
been issued in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on
a defer queue until another event arrives that clears the error, or
until the timeout expires. The value is in milliseconds.
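
For example, to keep failed updates on the defer queue for up to two
seconds (the value is illustrative only):

    # echo 2000 > /proc/sys/net/tipc/named_timeout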