Based on kernel version 4.16.1. Page generated on 2018-04-09 11:53 EST.
Documentation for /proc/sys/net/*
	(c) 1999		Terrehon Bowden <terrehon@pacbell.net>
				Bodo Bauer <bb@ricochet.net>
	(c) 2000		Jorge Nerin <comandante@zaralinux.com>
	(c) 2009		Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/net.

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.


Table : Subdirectories in /proc/sys/net
..............................................................................
 Directory  Content              Directory   Content
 core       General parameters   appletalk   Appletalk protocol
 unix       Unix domain sockets  netrom      NET/ROM
 802        E802 protocol        ax25        AX25
 ethernet   Ethernet protocol    rose        X.25 PLP layer
 ipv4       IP version 4         x25         X.25 protocol
 ipx        IPX                  token-ring  IBM token ring
 bridge     Bridging             decnet      DEC net
 ipv6       IP version 6         tipc        TIPC
..............................................................................

1. /proc/sys/net/core - Network core options
-------------------------------------------------------

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure allowing the execution of bytecode at
various hook points. It is used in a number of Linux kernel subsystems
such as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes,
tracepoints) and security (e.g. seccomp). LLVM has a BPF back end that
can compile restricted C into a sequence of BPF instructions. After a
program has been loaded through bpf(2) and has passed the in-kernel
verifier, a JIT will then translate these BPF programs into native CPU
instructions. There are two flavors of JITs. The newer eBPF JIT is
currently supported on:
  - x86_64
  - arm64
  - arm32
  - ppc64
  - sparc64
  - mips64
  - s390x

The older cBPF JIT is supported on the following archs:
  - mips
  - ppc
  - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc., but not eBPF programs
loaded through bpf(2).

Values :
	0 - disable the JIT (default value)
	1 - enable the JIT
	2 - enable the JIT and ask the compiler to emit traces on kernel log.
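As a quick sketch of how this knob is exercised (plain sysctl(8)/procfs
usage; nothing here beyond the path and values documented above):

	# Check the current JIT setting (0, 1 or 2).
	cat /proc/sys/net/core/bpf_jit_enable

	# Enable the JIT; equivalent to "sysctl -w net.core.bpf_jit_enable=1".
	echo 1 > /proc/sys/net/core/bpf_jit_enable

Value 2 is generally meant for JIT debugging only, since the emitted
traces end up in the kernel log.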
bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. Hardening is supported
by the eBPF JIT backends. Enabling hardening trades off performance, but
can mitigate JIT spraying.
Values :
	0 - disable JIT hardening (default value)
	1 - enable JIT hardening for unprivileged users only
	2 - enable JIT hardening for all users

bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, compiled images reside at
addresses unknown to the kernel, meaning they show up neither in traces
nor in /proc/kallsyms. This enables export of these addresses, which can
be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.
Values :
	0 - disable JIT kallsyms export (default value)
	1 - enable JIT kallsyms export for privileged users only

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI
interrupt; it's a per-CPU variable. For drivers that support LRO or
GRO_HW, a hardware-aggregated packet is counted as one packet in this
context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing competes with the registered NAPI poll
function of the driver for the per-softirq-cycle netdev_budget. This
parameter influences the proportion of the configured netdev_budget that
is spent on RPS-based packet processing during RX softirq cycles. It is
further meant to make the current dev_weight adaptable to asymmetric
CPU needs on the RX/TX sides of the network stack (see
dev_weight_tx_bias). It is effective on a per-CPU basis. The value is
derived from dev_weight by multiplication (dev_weight *
dev_weight_rx_bias).
Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX
softirq cycle. Effective on a per-CPU basis. Allows scaling of the
current dev_weight for asymmetric net stack processing needs. Be careful
to avoid making TX softirq processing a CPU hog. The value is derived
from dev_weight by multiplication (dev_weight * dev_weight_tx_bias).
Default: 1

default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the
default queuing discipline is created without additional parameters, it
is best suited to queuing disciplines that work well without
configuration, like stochastic fair queue (sfq), CoDel (codel) or fair
queue CoDel (fq_codel). Don't use queuing disciplines like Hierarchical
Token Bucket or Deficit Round Robin which require setting up classes and
bandwidths. Note that physical multiqueue interfaces still use mq as
root qdisc, which in turn uses this default for its leaves. Virtual
devices (like e.g. lo or veth) ignore this setting and instead default
to noqueue.
Default: pfifo_fast

busy_read
---------
Low latency busy poll timeout for socket reads. (needs
CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for packets on the device
queue. This sets the default value of the SO_BUSY_POLL socket option.
It can be set or overridden per socket by setting the SO_BUSY_POLL
socket option, which is the preferred method of enabling the feature.
If you need to enable the feature globally via sysctl, a value of 50 is
recommended.
Will increase power usage.
Default: 0 (off)

busy_poll
---------
Low latency busy poll timeout for poll and select. (needs
CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for events.
The recommended value depends on the number of sockets you poll on:
50 for several sockets, 100 for several hundred. For more than that you
probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled, so you
want to either selectively set SO_BUSY_POLL on those sockets or set
net.core.busy_read globally.
Will increase power usage.
Default: 0 (off)
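As an example, the documented global opt-in with the recommended value
of 50 us looks as follows (plain sysctl(8) usage; remember the power
usage caveat):

	# Busy poll up to 50 us on blocking socket reads ...
	sysctl -w net.core.busy_read=50
	# ... and up to 50 us in poll() and select().
	sysctl -w net.core.busy_poll=50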
rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

tstamp_allow_data
-----------------
Allow processes to receive tx timestamps looped together with the
original packet contents. If disabled, transmit timestamp requests from
unprivileged processes are dropped unless the socket option
SOF_TIMESTAMPING_OPT_TSONLY is set.
Default: 1 (on)


wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.
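The four buffer limits are often inspected and tuned together. A small
sketch (the 4 MiB figure is purely illustrative, not a recommendation
from this document):

	# Current receive buffer default and ceiling, in bytes.
	cat /proc/sys/net/core/rmem_default /proc/sys/net/core/rmem_max

	# Let applications request up to 4 MiB via SO_RCVBUF/SO_SNDBUF.
	sysctl -w net.core.rmem_max=4194304
	sysctl -w net.core.wmem_max=4194304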
message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the
kernel log from the networking code. They enforce a rate limit to guard
against denial-of-service attacks. A higher message_cost factor results
in fewer messages being written. message_burst controls when messages
will be dropped. The default settings limit warning messages to one
every five seconds.

warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occur because of problems on the network, like duplicate addresses or
bad checksums.

These messages are now emitted at KERN_DEBUG and can generally be
enabled and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle
(NAPI poll). In one polling cycle interfaces which are registered for
polling are probed in a round-robin manner. Also, a polling cycle may
not exceed netdev_budget_usecs microseconds, even if netdev_budget has
not been exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling will
exit when either netdev_budget_usecs have elapsed during the poll cycle
or the number of packets processed reaches netdev_budget.
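The two budget knobs thus bound one poll cycle in packets and in time,
whichever limit is hit first. A sketch of inspecting and raising both
(the figures are illustrative only; check your kernel's current values
first):

	# Current per-cycle limits: packet budget, then time budget in us.
	cat /proc/sys/net/core/netdev_budget /proc/sys/net/core/netdev_budget_usecs

	# Allow more work per NAPI poll cycle.
	sysctl -w net.core.netdev_budget=600
	sysctl -w net.core.netdev_budget_usecs=4000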
netdev_max_backlog
------------------

Maximum number of packets queued on the INPUT side when the interface
receives packets faster than the kernel can process them.

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40-byte host key that
is randomly generated.
Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

myhost:~# cat /proc/sys/net/core/netdev_rss_key
84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

The file contains nul bytes if no driver ever called the
netdev_rss_key_fill() function.
Note:
/proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
but most drivers only use 40 bytes of it.

myhost:~# ethtool -x eth0
RX flow hash indirection table for eth0 with 8 RX ring(s):
    0:    0     1     2     3     4     5     6     7
RSS hash key:
84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing,
when the target CPU processes packets. This might add some delay to the
timestamps, but permits distributing the load over several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible,
before queueing.

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a
sequence of struct cmsghdr structures with appended data.

2. /proc/sys/net/unix - Parameters for Unix domain sockets
-------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the maximum number of datagrams queued in a Unix
domain socket's buffer. It will not take effect unless the PF_UNIX flag
is specified.


3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------------------------
Please see: Documentation/networking/ip-sysctl.txt and ipvs-sysctl.txt
for descriptions of these entries.


4. Appletalk
-------------------------------------------------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration
data when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an AARP entry before expiring it. Used to age
out old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk
sockets on a machine.

The fields indicate the DDP type, the local address (in network:node
format), the remote address, the size of the transmit pending queue, the
size of the received queue (bytes waiting for applications to read), the
state, and the uid owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for
Appletalk. It shows the name of the interface, its Appletalk address,
the network range on that address (or network number for phase 1
networks), and the status of the interface.

/proc/net/atalk_route lists each known network route. It lists the
target (network) that the route leads to, the router (may be directly
connected), the route flags, and the device the route is using.


5. IPX
-------------------------------------------------------

The IPX protocol has no tunable values in /proc/sys/net.

The IPX protocol does, however, provide /proc/net/ipx. This lists each
IPX socket, giving the local and remote addresses in Novell format (that
is network:node:port). In accordance with the strange Novell tradition,
everything but the port is in hex. Not_Connected is displayed for
sockets that are not tied to a specific remote address. The Tx and Rx
queue sizes indicate the number of bytes pending for transmission and
reception. The state indicates the state the socket is in and the uid is
the owning uid of the socket.

The /proc/net/ipx_interface file lists all IPX interfaces. For each
interface it gives the network number, the node number, and indicates if
the network is the primary network. It also indicates which device it is
bound to (or Internal for internal networks) and the Frame Type if
appropriate. Linux supports 802.3, 802.2, 802.2 SNAP and DIX (Blue Book)
ethernet framing for IPX.

The /proc/net/ipx_route table holds a list of IPX routes. For each route
it gives the destination network, the router node (or Directly) and the
network address of the router (or Connected) for internal networks.

6. TIPC
-------------------------------------------------------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to
tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)

    # cat /proc/sys/net/tipc/tipc_rmem
    4252725 34021800  68043600
    #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min
values are scaled (shifted) versions of that same value. Note that the
min value is not at this point in time used in any meaningful way, but
the triplet is preserved in order to be consistent with things like
tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster,
without any form of transaction handling. This means that different race
scenarios are possible. One such is that a name withdrawal sent out by
one node and received by another node may arrive after a second,
overlapping name publication has already been accepted from a third
node, although the conflicting updates originally may have been issued
in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on a
defer queue until another event arrives that clears the error, or until
the timeout expires. Value is in milliseconds.
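For example, to defer failed name table updates for up to two seconds
(the value is illustrative only; the sysctl name follows from the
/proc/sys/net/tipc location of the file):

	# Keep failed TIPC topology updates queued for up to 2000 ms.
	sysctl -w net.tipc.named_timeout=2000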