Commit 08dbc7a

Alexei Starovoitov committed:

Merge branch 'AF_XDP-initial-support'
Björn Töpel says:

====================

This patch set introduces a new address family called AF_XDP that is optimized for high performance packet processing and, in upcoming patch sets, zero-copy semantics. In this patch set, we have removed all zero-copy related code in order to make it smaller, simpler and hopefully more review friendly. This patch set only supports copy-mode for the generic XDP path (XDP_SKB) for both RX and TX and copy-mode for RX using the XDP_DRV path. Zero-copy support requires XDP and driver changes that Jesper Dangaard Brouer is working on. Some of his work has already been accepted. We will publish our zero-copy support for RX and TX on top of his patch sets at a later point in time.

An AF_XDP socket (XSK) is created with the normal socket() syscall. Associated with each XSK are two queues: the RX queue and the TX queue. A socket can receive packets on the RX queue and it can send packets on the TX queue. These queues are registered and sized with the setsockopts XDP_RX_RING and XDP_TX_RING, respectively. It is mandatory to have at least one of these queues for each socket. In contrast to AF_PACKET V2/V3, these descriptor queues are separated from the packet buffers. An RX or TX descriptor points to a data buffer in a memory area called a UMEM. RX and TX can share the same UMEM so that a packet does not have to be copied between RX and TX. Moreover, if a packet needs to be kept for a while due to a possible retransmit, the descriptor that points to that packet can be changed to point to another one and reused right away. This again avoids copying data.

This new dedicated packet buffer area is called a UMEM. It consists of a number of equally sized frames and each frame has a unique frame id. A descriptor in one of the queues references a frame by referencing its frame id. User space allocates memory for this UMEM using whatever means it finds most appropriate (malloc, mmap, huge pages, etc).
This memory area is then registered with the kernel using the new setsockopt XDP_UMEM_REG. The UMEM also has two queues: the FILL queue and the COMPLETION queue. The FILL queue is used by the application to send down frame ids for the kernel to fill in with RX packet data. References to these frames will then appear in the RX queue of the XSK once they have been received. The COMPLETION queue, on the other hand, contains frame ids that the kernel has transmitted completely and that can now be used again by user space, for either TX or RX. Thus, the frame ids appearing in the COMPLETION queue are ids that were previously transmitted using the TX queue. In summary, the RX and FILL queues are used for the RX path and the TX and COMPLETION queues are used for the TX path.

The socket is then finally bound with a bind() call to a device and a specific queue id on that device, and it is not until bind is completed that traffic starts to flow. Note that in this patch set, all packet data is copied out to user space.

A new feature in this patch set is that the UMEM can be shared between processes, if desired. A process that wants to do this simply skips the registration of the UMEM and its corresponding two queues, sets a flag in the bind call and submits the XSK of the process it would like to share the UMEM with as well as its own newly created XSK socket. The new process will then receive frame id references in its own RX queue that point to this shared UMEM. Note that since the queue structures are single-consumer / single-producer (for performance reasons), the new process has to create its own socket with associated RX and TX queues, since it cannot share these with the other process. This is also the reason that there is only one set of FILL and COMPLETION queues per UMEM. It is the responsibility of a single process to handle the UMEM. If multiple-producer / multiple-consumer queues are implemented in the future, this requirement could be relaxed.
How, then, are packets distributed between these two XSKs? We have introduced a new BPF map called XSKMAP (or BPF_MAP_TYPE_XSKMAP in full). The user-space application can place an XSK at an arbitrary place in this map. The XDP program can then redirect a packet to a specific index in this map, at which point XDP validates that the XSK in that map was indeed bound to that device and queue number. If not, the packet is dropped. If the map is empty at that index, the packet is also dropped. This also means that it is currently mandatory to have an XDP program loaded (and one XSK in the XSKMAP) to be able to get any traffic to user space through the XSK.

AF_XDP can operate in two different modes: XDP_SKB and XDP_DRV. If the driver does not have support for XDP, or if XDP_SKB is explicitly chosen when loading the XDP program, XDP_SKB mode is employed. It uses SKBs together with the generic XDP support and copies out the data to user space; this is a fallback mode that works for any network device. If, on the other hand, the driver has support for XDP, it will be used by the AF_XDP code to provide better performance, but there is still a copy of the data into user space.

There is an xdpsock benchmarking/test application included that demonstrates how to use AF_XDP sockets with both private and shared UMEMs. Say that you would like your UDP traffic from port 4242 to end up in queue 16, which we will enable AF_XDP on. Here, we use ethtool for this:

  ethtool -N p3p2 rx-flow-hash udp4 fn
  ethtool -N p3p2 flow-type udp4 src-port 4242 dst-port 4242 \
      action 16

Running the rxdrop benchmark in XDP_DRV mode can then be done using:

  samples/bpf/xdpsock -i p3p2 -q 16 -r -N

For XDP_SKB mode, use the switch "-S" instead of "-N" and all options can be displayed with "-h", as usual.

We have run some benchmarks on a dual socket system with two Broadwell E5 2660 @ 2.0 GHz with hyperthreading turned off. Each socket has 14 cores which gives a total of 28, but only two cores are used in these experiments.
One core is used for TX/RX and one for the user space application. The memory is DDR4 @ 2133 MT/s (1067 MHz) and the size of each DIMM is 8192 MB; with 8 of those DIMMs in the system we have 64 GB of total memory. The compiler used is gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0. The NIC is an Intel I40E 40Gbit/s using the i40e driver.

Below are the results in Mpps of the I40E NIC benchmark runs for 64 and 1500 byte packets, generated by a commercial packet generator HW outputting packets at full 40 Gbit/s line rate. The results are without retpoline so that we can compare against previous numbers. With retpoline, the AF_XDP numbers drop by between 10 and 15 percent.

AF_XDP performance, 64 byte packets. Results from V2 in parentheses.

  Benchmark   XDP_SKB     XDP_DRV
  rxdrop      2.9(3.0)    9.6(9.5)
  txpush      2.6(2.5)    NA*
  l2fwd       1.9(1.9)    2.5(2.5)  (TX using XDP_SKB in both cases)

AF_XDP performance, 1500 byte packets:

  Benchmark   XDP_SKB     XDP_DRV
  rxdrop      2.1(2.2)    3.3(3.3)
  l2fwd       1.4(1.4)    1.8(1.8)  (TX using XDP_SKB in both cases)

* NA since we have no support for TX using the XDP_DRV infrastructure in this patch set. This is for a future patch set since it involves changes to the XDP NDOs. Some of this has been upstreamed by Jesper Dangaard Brouer.

XDP performance on our system as a base line:

64 byte packets:

  XDP stats       CPU     pps          issue-pps
  XDP-RX CPU      16      32.3(32.9)M  0

1500 byte packets:

  XDP stats       CPU     pps          issue-pps
  XDP-RX CPU      16      3.3(3.3)M    0

Changes from V2:

* Fixed a race in the XSKMAP found by Will. The code has been completely rearchitected and is now simpler, faster, and hopefully also not racy. Please review and check if it holds.
If you would like to diff V2 against V3, you can find them here:

  https://github.com/bjoto/linux/tree/af-xdp-v2-on-bpf-next
  https://github.com/bjoto/linux/tree/af-xdp-v3-on-bpf-next

The structure of the patch set is as follows:

  Patches 1-3: Basic socket and umem plumbing
  Patches 4-9: RX support together with the new XSKMAP
  Patches 10-13: TX support
  Patch 14: Statistics support with getsockopt()
  Patch 15: Sample application

We based this patch set on bpf-next commit a3fe1f6 ("tools: bpftool: change time format for program 'loaded at:' information")

To do for this patch set:

* Syzkaller torture session being worked on

Post-series plan:

* Optimize performance
* Kernel selftest
* Kernel load module support of AF_XDP would be nice. Unclear how to achieve this though since our XDP code depends on net/core.
* Support for AF_XDP sockets without an XDP program loaded. In this case all the traffic on a queue should go up to the user space socket.
* Daniel Borkmann's suggestion for a "copy to XDP socket, and return XDP_PASS" for a tcpdump-like functionality.
* And of course getting to zero-copy support in small increments, starting with TX then adding RX.

Thanks: Björn and Magnus

====================

Acked-by: Willem de Bruijn <[email protected]>
Acked-by: David S. Miller <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
2 parents 03f5781 + b4b8faa commit 08dbc7a

36 files changed (+3221, -72 lines)
Documentation/networking/af_xdp.rst

Lines changed: 297 additions & 0 deletions
@@ -0,0 +1,297 @@
.. SPDX-License-Identifier: GPL-2.0

======
AF_XDP
======

Overview
========

AF_XDP is an address family that is optimized for high performance
packet processing.

This document assumes that the reader is familiar with BPF and XDP. If
not, the Cilium project has an excellent reference guide at
http://cilium.readthedocs.io/en/doc-1.0/bpf/.

Using the XDP_REDIRECT action from an XDP program, the program can
redirect ingress frames to other XDP enabled netdevs, using the
bpf_redirect_map() function. AF_XDP sockets enable the possibility for
XDP programs to redirect frames to a memory buffer in a user-space
application.

An AF_XDP socket (XSK) is created with the normal socket()
syscall. Associated with each XSK are two rings: the RX ring and the
TX ring. A socket can receive packets on the RX ring and it can send
packets on the TX ring. These rings are registered and sized with the
setsockopts XDP_RX_RING and XDP_TX_RING, respectively. It is mandatory
to have at least one of these rings for each socket. An RX or TX
descriptor ring points to a data buffer in a memory area called a
UMEM. RX and TX can share the same UMEM so that a packet does not have
to be copied between RX and TX. Moreover, if a packet needs to be kept
for a while due to a possible retransmit, the descriptor that points
to that packet can be changed to point to another one and reused right
away. This again avoids copying data.

The UMEM consists of a number of equally sized frames and each frame
has a unique frame id. A descriptor in one of the rings references a
frame by referencing its frame id. User space allocates memory for
this UMEM using whatever means it finds most appropriate (malloc,
mmap, huge pages, etc). This memory area is then registered with the
kernel using the new setsockopt XDP_UMEM_REG. The UMEM also has two
rings: the FILL ring and the COMPLETION ring. The FILL ring is used by
the application to send down frame ids for the kernel to fill in with
RX packet data. References to these frames will then appear in the RX
ring once each packet has been received. The COMPLETION ring, on the
other hand, contains frame ids that the kernel has transmitted
completely and that can now be used again by user space, for either TX
or RX. Thus, the frame ids appearing in the COMPLETION ring are ids
that were previously transmitted using the TX ring. In summary, the RX
and FILL rings are used for the RX path and the TX and COMPLETION
rings are used for the TX path.

The socket is then finally bound with a bind() call to a device and a
specific queue id on that device, and it is not until bind is
completed that traffic starts to flow.

The UMEM can be shared between processes, if desired. A process that
wants to do this simply skips the registration of the UMEM and its
corresponding two rings, sets the XDP_SHARED_UMEM flag in the bind
call and submits the XSK of the process it would like to share the
UMEM with as well as its own newly created XSK socket. The new process
will then receive frame id references in its own RX ring that point to
this shared UMEM. Note that since the ring structures are
single-consumer / single-producer (for performance reasons), the new
process has to create its own socket with associated RX and TX rings,
since it cannot share these with the other process. This is also the
reason that there is only one set of FILL and COMPLETION rings per
UMEM. It is the responsibility of a single process to handle the UMEM.

How, then, are packets distributed from an XDP program to the XSKs?
There is a BPF map called XSKMAP (or BPF_MAP_TYPE_XSKMAP in full). The
user-space application can place an XSK at an arbitrary place in this
map. The XDP program can then redirect a packet to a specific index in
this map, at which point XDP validates that the XSK in that map was
indeed bound to that device and ring number. If not, the packet is
dropped. If the map is empty at that index, the packet is also
dropped. This also means that it is currently mandatory to have an XDP
program loaded (and one XSK in the XSKMAP) to be able to get any
traffic to user space through the XSK.

AF_XDP can operate in two different modes: XDP_SKB and XDP_DRV. If the
driver does not have support for XDP, or if XDP_SKB is explicitly
chosen when loading the XDP program, XDP_SKB mode is employed. It uses
SKBs together with the generic XDP support and copies out the data to
user space; this is a fallback mode that works for any network
device. If, on the other hand, the driver has support for XDP, it will
be used by the AF_XDP code to provide better performance, but there is
still a copy of the data into user space.

Concepts
========

In order to use an AF_XDP socket, a number of associated objects need
to be set up.

Jonathan Corbet has also written an excellent article on LWN,
"Accelerating networking with AF_XDP". It can be found at
https://lwn.net/Articles/750845/.

UMEM
----

UMEM is a region of virtually contiguous memory, divided into
equal-sized frames. A UMEM is associated with a netdev and a specific
queue id of that netdev. It is created and configured (frame size,
frame headroom, start address and size) by using the XDP_UMEM_REG
setsockopt system call. A UMEM is bound to a netdev and queue id via
the bind() system call.

An AF_XDP socket is linked to a single UMEM, but one UMEM can have
multiple AF_XDP sockets. To share a UMEM created via one socket A,
the next socket B can do this by setting the XDP_SHARED_UMEM flag in
struct sockaddr_xdp member sxdp_flags, and passing the file descriptor
of A to struct sockaddr_xdp member sxdp_shared_umem_fd.

The UMEM has two single-producer/single-consumer rings that are used
to transfer ownership of UMEM frames between the kernel and the
user-space application.

Rings
-----

There are four different kinds of rings: Fill, Completion, RX and
TX. All rings are single-producer/single-consumer, so the user-space
application needs explicit synchronization if multiple
processes/threads are reading/writing to them.

The UMEM uses two rings: Fill and Completion. Each socket associated
with the UMEM must have an RX queue, TX queue or both. Say that there
is a setup with four sockets (all doing TX and RX). Then there will be
one Fill ring, one Completion ring, four TX rings and four RX rings.

The rings are head(producer)/tail(consumer) based rings. A producer
writes the data ring at the index pointed out by the struct xdp_ring
producer member, and increases the producer index. A consumer reads
the data ring at the index pointed out by the struct xdp_ring consumer
member, and increases the consumer index.

The rings are configured and created via the _RING setsockopt system
calls and mmapped to user-space using the appropriate offset to mmap()
(XDP_PGOFF_RX_RING, XDP_PGOFF_TX_RING, XDP_UMEM_PGOFF_FILL_RING and
XDP_UMEM_PGOFF_COMPLETION_RING).

The size of the rings must be a power of two.

UMEM Fill Ring
~~~~~~~~~~~~~~

The Fill ring is used to transfer ownership of UMEM frames from
user space to kernel space. The UMEM indices are passed in the
ring. As an example, if the UMEM is 64k and each frame is 4k, then the
UMEM has 16 frames and can pass indices between 0 and 15.

Frames passed to the kernel are used for the ingress path (RX rings).

The user application produces UMEM indices to this ring.

UMEM Completion Ring
~~~~~~~~~~~~~~~~~~~~

The Completion ring is used to transfer ownership of UMEM frames from
kernel space to user space. Just like the Fill ring, UMEM indices are
used.

Frames passed from the kernel to user space are frames that have been
sent (TX ring) and can be used by user space again.

The user application consumes UMEM indices from this ring.

RX Ring
~~~~~~~

The RX ring is the receiving side of a socket. Each entry in the ring
is a struct xdp_desc descriptor. The descriptor contains the UMEM
index (idx), the length of the data (len) and the offset into the
frame (offset).

If no frames have been passed to the kernel via the Fill ring, no
descriptors will (or can) appear on the RX ring.

The user application consumes struct xdp_desc descriptors from this
ring.

TX Ring
~~~~~~~

The TX ring is used to send frames. The struct xdp_desc descriptor is
filled (index, length and offset) and passed into the ring.

To start the transfer a sendmsg() system call is required. This might
be relaxed in the future.

The user application produces struct xdp_desc descriptors to this
ring.

XSKMAP / BPF_MAP_TYPE_XSKMAP
----------------------------

On the XDP side there is a BPF map type BPF_MAP_TYPE_XSKMAP (XSKMAP)
that is used in conjunction with bpf_redirect_map() to pass the
ingress frame to a socket.

The user application inserts the socket into the map, via the bpf()
system call.

Note that if an XDP program tries to redirect to a socket that does
not match the queue configuration and netdev, the frame will be
dropped. E.g. an AF_XDP socket is bound to netdev eth0 and
queue 17. Only the XDP program executing for eth0 and queue 17 will
successfully pass data to the socket. Please refer to the sample
application (samples/bpf/) for an example.

Usage
=====

In order to use AF_XDP sockets there are two parts needed: the
user-space application and the XDP program. For a complete setup and
usage example, please refer to the sample application. The user-space
side is xdpsock_user.c and the XDP side is xdpsock_kern.c.

Naive ring dequeue and enqueue could look like this::

    // typedef struct xdp_rxtx_ring RING;
    // typedef struct xdp_umem_ring RING;

    // typedef struct xdp_desc RING_TYPE;
    // typedef __u32 RING_TYPE;

    int dequeue_one(RING *ring, RING_TYPE *item)
    {
        __u32 entries = ring->ptrs.producer - ring->ptrs.consumer;

        if (entries == 0)
            return -1;

        // read-barrier!

        *item = ring->desc[ring->ptrs.consumer & (RING_SIZE - 1)];
        ring->ptrs.consumer++;
        return 0;
    }

    int enqueue_one(RING *ring, const RING_TYPE *item)
    {
        __u32 free_entries = RING_SIZE - (ring->ptrs.producer - ring->ptrs.consumer);

        if (free_entries == 0)
            return -1;

        ring->desc[ring->ptrs.producer & (RING_SIZE - 1)] = *item;

        // write-barrier!

        ring->ptrs.producer++;
        return 0;
    }

For a more optimized version, please refer to the sample application.

Sample application
==================

There is an xdpsock benchmarking/test application included that
demonstrates how to use AF_XDP sockets with both private and shared
UMEMs. Say that you would like your UDP traffic from port 4242 to end
up in queue 16, which we will enable AF_XDP on. Here, we use ethtool
for this::

      ethtool -N p3p2 rx-flow-hash udp4 fn
      ethtool -N p3p2 flow-type udp4 src-port 4242 dst-port 4242 \
          action 16

Running the rxdrop benchmark in XDP_DRV mode can then be done
using::

      samples/bpf/xdpsock -i p3p2 -q 16 -r -N

For XDP_SKB mode, use the switch "-S" instead of "-N" and all options
can be displayed with "-h", as usual.

Credits
=======

- Björn Töpel (AF_XDP core)
- Magnus Karlsson (AF_XDP core)
- Alexander Duyck
- Alexei Starovoitov
- Daniel Borkmann
- Jesper Dangaard Brouer
- John Fastabend
- Jonathan Corbet (LWN coverage)
- Michael S. Tsirkin
- Qi Z Zhang
- Willem de Bruijn

Documentation/networking/index.rst

Lines changed: 1 addition & 0 deletions

@@ -6,6 +6,7 @@ Contents:
 .. toctree::
    :maxdepth: 2

+   af_xdp
    batman-adv
    can
    dpaa2/index

MAINTAINERS

Lines changed: 8 additions & 0 deletions

@@ -15424,6 +15424,14 @@ T: git git://linuxtv.org/media_tree.git
 S:	Maintained
 F:	drivers/media/tuners/tuner-xc2028.*

+XDP SOCKETS (AF_XDP)
+M:	Björn Töpel <[email protected]>
+M:	Magnus Karlsson <[email protected]>
+
+S:	Maintained
+F:	kernel/bpf/xskmap.c
+F:	net/xdp/
+
 XEN BLOCK SUBSYSTEM
 M:	Konrad Rzeszutek Wilk <[email protected]>
 M:	Roger Pau Monné <[email protected]>

include/linux/bpf.h

Lines changed: 25 additions & 0 deletions

@@ -676,6 +676,31 @@ static inline int sock_map_prog(struct bpf_map *map,
 }
 #endif

+#if defined(CONFIG_XDP_SOCKETS)
+struct xdp_sock;
+struct xdp_sock *__xsk_map_lookup_elem(struct bpf_map *map, u32 key);
+int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp,
+		       struct xdp_sock *xs);
+void __xsk_map_flush(struct bpf_map *map);
+#else
+struct xdp_sock;
+static inline struct xdp_sock *__xsk_map_lookup_elem(struct bpf_map *map,
+						     u32 key)
+{
+	return NULL;
+}
+
+static inline int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp,
+				     struct xdp_sock *xs)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void __xsk_map_flush(struct bpf_map *map)
+{
+}
+#endif
+
 /* verifier prototypes for helper functions called from eBPF programs */
 extern const struct bpf_func_proto bpf_map_lookup_elem_proto;
 extern const struct bpf_func_proto bpf_map_update_elem_proto;

include/linux/bpf_types.h

Lines changed: 3 additions & 0 deletions

@@ -49,4 +49,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP, dev_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_SOCKMAP, sock_map_ops)
 #endif
 BPF_MAP_TYPE(BPF_MAP_TYPE_CPUMAP, cpu_map_ops)
+#if defined(CONFIG_XDP_SOCKETS)
+BPF_MAP_TYPE(BPF_MAP_TYPE_XSKMAP, xsk_map_ops)
+#endif
 #endif

include/linux/filter.h

Lines changed: 1 addition & 1 deletion

@@ -760,7 +760,7 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
  * This does not appear to be a real limitation for existing software.
  */
 int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
-			    struct bpf_prog *prog);
+			    struct xdp_buff *xdp, struct bpf_prog *prog);
 int xdp_do_redirect(struct net_device *dev,
 		    struct xdp_buff *xdp,
 		    struct bpf_prog *prog);

include/linux/netdevice.h

Lines changed: 1 addition & 0 deletions

@@ -2486,6 +2486,7 @@ void dev_disable_lro(struct net_device *dev);
 int dev_loopback_xmit(struct net *net, struct sock *sk, struct sk_buff *newskb);
 int dev_queue_xmit(struct sk_buff *skb);
 int dev_queue_xmit_accel(struct sk_buff *skb, void *accel_priv);
+int dev_direct_xmit(struct sk_buff *skb, u16 queue_id);
 int register_netdevice(struct net_device *dev);
 void unregister_netdevice_queue(struct net_device *dev, struct list_head *head);
 void unregister_netdevice_many(struct list_head *head);
