This file documents how to use memory mapped I/O with netlink.

Author: Patrick McHardy <kaber@trash.net>

Memory mapped netlink I/O can be used to increase throughput and decrease
overhead of unicast receive and transmit operations. Some netlink subsystems
require high throughput; these are mainly the netfilter subsystems
nfnetlink_queue and nfnetlink_log, but it can also help speed up large
dump operations of e.g. the routing database.

Memory mapped netlink I/O uses two circular ring buffers for RX and TX which
are mapped into the process's address space.

The RX ring is used by the kernel to directly construct netlink messages into
user-space memory without copying them as done with regular socket I/O.
Additionally, as long as the ring contains messages, no recvmsg() or poll()
syscalls have to be issued by user-space to get more messages.

The TX ring is used to process messages directly from user-space memory; the
kernel processes all messages contained in the ring using a single sendmsg()
call.

In order to use memory mapped netlink I/O, user-space needs three main changes:

- ring setup
- conversion of the RX path to get messages from the ring instead of recvmsg()
- conversion of the TX path to construct messages into the ring

Ring setup is done using setsockopt() to provide the ring parameters to the
kernel, then a call to mmap() to map the ring into the process's address space:

- setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &params, sizeof(params));
- setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &params, sizeof(params));
- ring = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)

Usage of either ring is optional, but even if only the RX ring is used the
mapping still needs to be writable in order to update the frame status after
processing.

Conversion of the reception path involves calling poll() on the file
descriptor; once the socket is readable, the frames from the ring are
processed in order until no more messages are available, as indicated by
a status word in the frame header.

On the kernel side, in order to make use of memory mapped I/O on receive, the
originating netlink subsystem needs to support memory mapped I/O; otherwise
it will use an allocated socket buffer as usual and the contents will be
copied to the ring on transmission, nullifying most of the performance gains.
Dumps of kernel databases automatically support memory mapped I/O.

Conversion of the transmit path involves changing message construction to
use memory from the TX ring instead of (usually) a buffer declared on the
stack and setting up the frame header appropriately. Optionally, poll() can
be used to wait for free frames in the TX ring.

Structures and definitions for using memory mapped I/O are contained in
<linux/netlink.h>.

Each ring contains a number of contiguous memory blocks, containing frames of
fixed size dependent on the parameters used for ring setup.

Ring:	[ block 0 ]
		[ frame 0 ]
		[ frame 1 ]
	[ block 1 ]
		[ frame 2 ]
		[ frame 3 ]
	...
	[ block n ]
		[ frame 2 * n ]
		[ frame 2 * n + 1 ]

The blocks are only visible to the kernel; from the point of view of user-space,
the ring just contains the frames in a contiguous memory zone.

The ring parameters used for setting up the ring are defined as follows:

struct nl_mmap_req {
	unsigned int	nm_block_size;
	unsigned int	nm_block_nr;
	unsigned int	nm_frame_size;
	unsigned int	nm_frame_nr;
};

Frames are grouped into blocks, where each block is a contiguous region of
memory and holds nm_block_size / nm_frame_size frames. The total number of
frames in the ring is nm_frame_nr. The following invariants hold:

- frames_per_block = nm_block_size / nm_frame_size

- nm_frame_nr = frames_per_block * nm_block_nr

Some parameters are constrained, specifically:

- nm_block_size must be a multiple of the architecture's memory page size.
  The getpagesize() function can be used to get the page size.

- nm_frame_size must be equal to or larger than NL_MMAP_HDRLEN, IOW a frame
  must be able to hold at least the frame header.

- nm_frame_size must be smaller than or equal to nm_block_size.

- nm_frame_size must be a multiple of NL_MMAP_MSG_ALIGNMENT.

- nm_frame_nr must equal the actual number of frames as specified above.

When the kernel can't allocate physically contiguous memory for a ring block,
it will fall back to using physically discontiguous memory. This might affect
performance negatively; in order to avoid this, the nm_block_size parameter
should be chosen to be as small as possible for the required frame size and
the number of blocks should be increased instead.

Each frame contains a frame header, consisting of a synchronization word and
some meta-data, and the message itself.

Frame:	[ header message ]

The frame header is defined as follows:

struct nl_mmap_hdr {
	unsigned int	nm_status;
	unsigned int	nm_len;
	__u32		nm_group;
	/* credentials */
	__u32		nm_pid;
	__u32		nm_uid;
	__u32		nm_gid;
};

- nm_status is used for synchronizing processing between the kernel and
  user-space and specifies ownership of the frame as well as the operation
  to perform.

- nm_len contains the length of the message contained in the data area.

- nm_group specifies the destination multicast group of the message.

- nm_pid, nm_uid and nm_gid contain the netlink pid, UID and GID of the sending
  process. These values correspond to the data available using SOCK_PASSCRED in
  the SCM_CREDENTIALS cmsg.

The possible values in the status word are:

- NL_MMAP_STATUS_UNUSED:
	RX ring:	frame belongs to the kernel and contains no message
			for user-space. Appropriate action is to invoke poll()
			to wait for new messages.

	TX ring:	frame belongs to user-space and can be used for
			message construction.

- NL_MMAP_STATUS_RESERVED:
	RX ring only:	frame is currently used by the kernel for message
			construction and contains no valid message yet.
			Appropriate action is to invoke poll() to wait for
			new messages.

- NL_MMAP_STATUS_VALID:
	RX ring:	frame contains a valid message. Appropriate action is
			to process the message and release the frame back to
			the kernel by setting the status to
			NL_MMAP_STATUS_UNUSED or queue the frame by setting the
			status to NL_MMAP_STATUS_SKIP.

	TX ring:	the frame contains a valid message from user-space to
			be processed by the kernel. After completing processing
			the kernel will release the frame back to user-space by
			setting the status to NL_MMAP_STATUS_UNUSED.

- NL_MMAP_STATUS_COPY:
	RX ring only:	a message is ready to be processed but could not be
			stored in the ring, either because it exceeded the
			frame size or because the originating subsystem does
			not support memory mapped I/O. Appropriate action is
			to invoke recvmsg() to receive the message and release
			the frame back to the kernel by setting the status to
			NL_MMAP_STATUS_UNUSED.

- NL_MMAP_STATUS_SKIP:
	RX ring only:	user-space queued the message for later processing, but
			processed some messages following it in the ring. The
			kernel should skip this frame when looking for unused
			frames.

The data area of a frame begins at an offset of NL_MMAP_HDRLEN relative to the
frame header.

As of Jan 2015 the message is always copied from the ring frame to an
allocated buffer due to unresolved security concerns.
See commit 4682a0358639b29cf ("netlink: Always copy on mmap TX.").

Ring setup:

unsigned int block_size = 16 * getpagesize();
struct nl_mmap_req req = {
	.nm_block_size		= block_size,
	.nm_block_nr		= 64,
	.nm_frame_size		= 16384,
	.nm_frame_nr		= 64 * block_size / 16384,
};
unsigned int ring_size;
void *rx_ring, *tx_ring;

/* Configure ring parameters */
if (setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &req, sizeof(req)) < 0)
	exit(1);
if (setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &req, sizeof(req)) < 0)
	exit(1);

/* Calculate size of each individual ring */
ring_size = req.nm_block_nr * req.nm_block_size;

/* Map RX/TX rings. The TX ring is located after the RX ring */
rx_ring = mmap(NULL, 2 * ring_size, PROT_READ | PROT_WRITE,
	       MAP_SHARED, fd, 0);
if ((long)rx_ring == -1L)
	exit(1);
tx_ring = rx_ring + ring_size;

Message reception:

This example assumes some ring parameters of the ring setup are available.

unsigned int frame_offset = 0;
struct nl_mmap_hdr *hdr;
struct nlmsghdr *nlh;
unsigned char buf[16384];
ssize_t len;

while (1) {
	struct pollfd pfds[1];

	pfds[0].fd	= fd;
	pfds[0].events	= POLLIN | POLLERR;
	pfds[0].revents	= 0;

	if (poll(pfds, 1, -1) < 0 && errno != EINTR)
		exit(1);

	/* Check for errors. Error handling omitted */
	if (pfds[0].revents & POLLERR)
		<handle error>

	/* If no new messages, poll again */
	if (!(pfds[0].revents & POLLIN))
		continue;

	/* Process all frames */
	while (1) {
		/* Get next frame header */
		hdr = rx_ring + frame_offset;

		if (hdr->nm_status == NL_MMAP_STATUS_VALID) {
			/* Regular memory mapped frame */
			nlh = (void *)hdr + NL_MMAP_HDRLEN;
			len = hdr->nm_len;

			/* Release empty message immediately. May happen
			 * on error during message construction. */
			if (len == 0)
				goto release;

			process_msg(nlh);
		} else if (hdr->nm_status == NL_MMAP_STATUS_COPY) {
			/* Frame queued to socket receive queue */
			len = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
			if (len <= 0)
				break;
			nlh = (void *)buf;
			process_msg(nlh);
		} else
			/* No more messages to process, continue polling */
			break;

release:
		/* Release frame back to the kernel */
		hdr->nm_status = NL_MMAP_STATUS_UNUSED;

		/* Advance frame offset to next frame */
		frame_offset = (frame_offset + frame_size) % ring_size;
	}
}

Message transmission:

This example assumes some ring parameters of the ring setup are available.
A single message is constructed and transmitted; to send multiple messages
at once they would be constructed in consecutive frames before a final call
to sendto().

unsigned int frame_offset = 0;
struct nl_mmap_hdr *hdr;
struct nlmsghdr *nlh;
struct sockaddr_nl addr = {
	.nl_family	= AF_NETLINK,
};

hdr = tx_ring + frame_offset;
if (hdr->nm_status != NL_MMAP_STATUS_UNUSED)
	/* No frame available. Use poll() to avoid. */
	exit(1);

nlh = (void *)hdr + NL_MMAP_HDRLEN;

/* Build message */
build_message(nlh);

/* Fill frame header: length and status need to be set */
hdr->nm_len	= nlh->nlmsg_len;
hdr->nm_status	= NL_MMAP_STATUS_VALID;

if (sendto(fd, NULL, 0, 0, (struct sockaddr *)&addr, sizeof(addr)) < 0)
	exit(1);

/* Advance frame offset to next frame */
frame_offset = (frame_offset + frame_size) % ring_size;