
Commit b7f9ef7

Merge branch 'per-epoll-context-busy-poll'
Joe Damato says:

====================
Per epoll context busy poll support

Greetings:

Welcome to v8.

TL;DR This builds on commit bf3b9f6 ("epoll: Add busy poll support to epoll with socket fds.") by allowing user applications to enable epoll-based busy polling, set a busy poll packet budget, and enable or disable prefer busy poll on a per epoll context basis. This makes epoll-based busy polling much more usable for user applications than the current system-wide sysctl and hardcoded budget.

To allow for this, two ioctls have been added for epoll contexts for getting and setting a new struct, struct epoll_params.

ioctl was chosen vs a new syscall after reviewing a suggestion by Willem de Bruijn [1]. I am open to using a new syscall instead of an ioctl, but it seemed that:

- Busy poll affects all existing epoll_wait and epoll_pwait variants in the same way, so new versions of many syscalls might be needed. It seems much simpler for users to use the correct epoll_wait/epoll_pwait for their app and add a call to ioctl to enable or disable busy poll as needed. This also probably means less work to get an existing epoll app using busy poll.
- The previously added epoll_pwait2 helped to bring epoll closer to existing syscalls (like pselect and ppoll), and this busy poll change reflected as a new syscall would not have the same effect.

Note: patch 1/4 as of v4 uses an or (||) instead of an xor. I thought about it some more and I realized that if the user enables both the per-epoll context setting and the system-wide sysctl, then busy poll should be enabled and not disabled. Using xor doesn't seem to make much sense after thinking through this a bit.

Longer explanation:

Presently epoll has support for a very useful form of busy poll based on the incoming NAPI ID (see also: SO_INCOMING_NAPI_ID [2]).
This form of busy poll allows epoll_wait to drive NAPI packet processing, which allows for a few interesting user application designs that can reduce latency and also potentially improve L2/L3 cache hit rates by deferring NAPI until userland has finished its work. The documentation available on this is, IMHO, a bit confusing, so please allow me to explain how one might use it:

1. Ensure each application thread has its own epoll instance mapping 1-to-1 with NIC RX queues. An n-tuple filter would likely be used to direct connections with specific dest ports to these queues.
2. Optionally: set up IRQ coalescing for the NIC RX queues where busy polling will occur. This can help avoid the userland app being pre-empted by a hard IRQ while userland is running. Note this means that userland must take care to call epoll_wait and not take too long in userland, since it now drives NAPI via epoll_wait.
3. Optionally: consider using napi_defer_hard_irqs and gro_flush_timeout to further restrict IRQ generation from the NIC. These settings are system-wide, so their impact must be carefully weighed against the running applications.
4. Ensure that all incoming connections added to an epoll instance have the same NAPI ID. This can be done with a BPF filter when SO_REUSEPORT is used, or with getsockopt + SO_INCOMING_NAPI_ID when a single accept thread is used which dispatches incoming connections to threads.
5. Lastly, busy poll must be enabled via a sysctl (/proc/sys/net/core/busy_poll).

Please see Eric Dumazet's paper about busy polling [3] and a recent academic paper about measured performance improvements of busy polling [4] (albeit with a modification that is not currently present in the kernel) for additional context.
The unfortunate part about step 5 above is that it enables busy poll system-wide, which affects all user applications on the system, including epoll-based network applications which were not intended to be used this way or applications where increased CPU usage for lower latency network processing is unnecessary or not desirable. If the user wants to run one low latency epoll-based server application with epoll-based busy poll, but would like to run the rest of the applications on the system (which may also use epoll) without busy poll, this system-wide sysctl presents a significant problem.

This change preserves the system-wide sysctl, but adds a mechanism (via ioctl) to enable or disable busy poll for epoll contexts as needed by individual applications, making epoll-based busy poll more usable.

Note that this change includes an or (as of v4) instead of an xor. If the user has enabled both the system-wide sysctl and also the per epoll-context busy poll settings, then epoll should probably busy poll (vs being disabled).

Thanks,
Joe

v7 -> v8:
- Reviewed-by tag from Eric Dumazet applied to commit message of patch 1/4.
- patch 4/4:
  - EPIOCSPARAMS and EPIOCGPARAMS updated to use WRITE_ONCE and READ_ONCE, as requested by Eric Dumazet
  - Wrapped a long line (via netdev/checkpatch)

v6 -> v7:
- Acked-by tags from Stanislav Fomichev applied to commit messages of all patches.
- Reviewed-by tags from Jakub Kicinski, Eric Dumazet applied to commit messages of patches 2 and 3. Jiri Slaby's Reviewed-by applied to patch 4.
- patch 1/4:
  - busy_poll_usecs reduced from u64 to u32.
  - Unnecessary parens removed (via netdev/checkpatch)
  - Wrapped long line (via netdev/checkpatch)
  - Removed inline from busy_loop_ep_timeout, as objdump suggests the function is already inlined
  - Moved struct eventpoll assignment to declaration
  - busy_loop_ep_timeout is moved within CONFIG_NET_RX_BUSY_POLL and the ifdefs internally have been removed, as per Eric Dumazet's review
  - Removed ep_busy_loop_on from the !defined CONFIG_NET_RX_BUSY_POLL section, as it is only called when CONFIG_NET_RX_BUSY_POLL is defined
- patch 3/4:
  - Fixed whitespace alignment issue (via netdev/checkpatch)
- patch 4/4:
  - epoll_params.busy_poll_usecs has been reduced to u32
  - epoll_params.busy_poll_usecs is now checked to ensure it is <= S32_MAX
  - __pad has been reduced to a single u8
  - memchr_inv has been dropped and replaced with a simple check of the single __pad byte
  - Removed space after cast (via netdev/checkpatch)
  - Wrapped long line (via netdev/checkpatch)
  - Moved struct eventpoll *ep assignment to declaration, as per Jiri Slaby's review
  - Removed unnecessary !!, as per Jiri Slaby's review
  - Reorganized variables into reverse christmas tree order

v5 -> v6:
- patch 1/3: no functional change, but commit message corrected to explain that an or (||) is being used instead of xor.
- patch 3/4 is a new patch which adds support for a per epoll context prefer busy poll setting.
- patch 4/4 updated to allow getting/setting the per epoll context prefer busy poll setting; this setting is limited to either 0 or 1.

v4 -> v5:
- patch 3/3 updated to use memchr_inv to ensure that __pad is zero for the EPIOCSPARAMS ioctl. Recommended by Greg K-H [5], Dave Chinner [6], and Jiri Slaby [7].

v3 -> v4:
- patch 1/3 was updated to include an important functional change: ep_busy_loop_on was updated to use or (||) instead of xor (^). After thinking about it a bit more, I thought xor didn't make much sense. Enabling both the per-epoll context and the system-wide sysctl should probably enable busy poll, not disable it.
  So, or (||) makes more sense, I think.
- patch 3/3 was updated:
  - to change the epoll_params fields to be __u64, __u16, and __u8 and to pad the struct to a multiple of 64 bits. Suggested by Greg K-H [8] and Arnd Bergmann [9].
  - to remove an unused pr_fmt, left over from the previous revision.
  - ioctl now returns -EINVAL when epoll_params.busy_poll_usecs > U32_MAX.

v2 -> v3:
- cover letter updated to mention why ioctl seems (to me) like a better choice vs a new syscall.
- patch 3/4 was modified in 3 ways:
  - when an unknown ioctl is received, -ENOIOCTLCMD is returned instead of -EINVAL, as the ioctl documentation requires.
  - epoll_params.busy_poll_budget can only be set to a value larger than NAPI_POLL_WEIGHT if the code is run by privileged (CAP_NET_ADMIN) users. Otherwise, -EPERM is returned.
  - busy poll specific ioctl code moved out to its own function. On kernels without busy poll support, -EOPNOTSUPP is returned. This also makes the kernel build robot happier without littering the code with more #ifdefs.
- dropped patch 4/4 after Eric Dumazet's review of it when it was sent independently to the list [10].

v1 -> v2:
- cover letter updated to make a mention of napi_defer_hard_irqs and gro_flush_timeout as an added step 3 and to cite both Eric Dumazet's busy polling paper and a paper from University of Waterloo for additional context. Specifically calling out the xor in patch 1/4 in case it is missed by reviewers.
- Patch 2/4 has its commit message updated, but no functional changes. The commit message now describes that allowing a settable budget helps to improve throughput and is more consistent with other busy poll mechanisms that allow a settable budget via SO_BUSY_POLL_BUDGET.
- Patch 3/4 was modified to check if epoll_params.busy_poll_budget exceeds NAPI_POLL_WEIGHT. The larger value is allowed, but an error is printed. This was done for consistency with netif_napi_add_weight, which does the same.
- Patch 3/4: the struct epoll_params was updated to fix the type of the data field; it was uint8_t and was changed to u8.
- Patch 4/4 added to check if SO_BUSY_POLL_BUDGET exceeds NAPI_POLL_WEIGHT. The larger value is allowed, but an error is printed. This was done for consistency with netif_napi_add_weight, which does the same.
====================

Signed-off-by: David S. Miller <[email protected]>
2 parents 723615a + 18e2bf0 commit b7f9ef7

File tree: 3 files changed (+138, -7 lines)

Documentation/userspace-api/ioctl/ioctl-number.rst
Lines changed: 1 addition & 0 deletions

@@ -309,6 +309,7 @@ Code  Seq#    Include File        Comments
 0x89  0B-DF  linux/sockios.h
 0x89  E0-EF  linux/sockios.h     SIOCPROTOPRIVATE range
 0x89  F0-FF  linux/sockios.h     SIOCDEVPRIVATE range
+0x8A  00-1F  linux/eventpoll.h
 0x8B  all    linux/wireless.h
 0x8C  00-3F                      WiNRADiO driver
                                  <http://www.winradio.com.au/>

fs/eventpoll.c
Lines changed: 124 additions & 7 deletions

@@ -37,6 +37,7 @@
 #include <linux/seq_file.h>
 #include <linux/compat.h>
 #include <linux/rculist.h>
+#include <linux/capability.h>
 #include <net/busy_poll.h>
 
 /*
@@ -227,6 +228,11 @@ struct eventpoll {
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	/* used to track busy poll napi_id */
 	unsigned int napi_id;
+	/* busy poll timeout */
+	u32 busy_poll_usecs;
+	/* busy poll packet budget */
+	u16 busy_poll_budget;
+	bool prefer_busy_poll;
 #endif
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
@@ -387,11 +393,41 @@ static inline int ep_events_available(struct eventpoll *ep)
 }
 
 #ifdef CONFIG_NET_RX_BUSY_POLL
+/**
+ * busy_loop_ep_timeout - check if busy poll has timed out. The timeout value
+ * from the epoll instance ep is preferred, but if it is not set fallback to
+ * the system-wide global via busy_loop_timeout.
+ *
+ * @start_time: The start time used to compute the remaining time until timeout.
+ * @ep: Pointer to the eventpoll context.
+ *
+ * Return: true if the timeout has expired, false otherwise.
+ */
+static bool busy_loop_ep_timeout(unsigned long start_time,
+				 struct eventpoll *ep)
+{
+	unsigned long bp_usec = READ_ONCE(ep->busy_poll_usecs);
+
+	if (bp_usec) {
+		unsigned long end_time = start_time + bp_usec;
+		unsigned long now = busy_loop_current_time();
+
+		return time_after(now, end_time);
+	} else {
+		return busy_loop_timeout(start_time);
+	}
+}
+
+static bool ep_busy_loop_on(struct eventpoll *ep)
+{
+	return !!ep->busy_poll_usecs || net_busy_loop_on();
+}
+
 static bool ep_busy_loop_end(void *p, unsigned long start_time)
 {
 	struct eventpoll *ep = p;
 
-	return ep_events_available(ep) || busy_loop_timeout(start_time);
+	return ep_events_available(ep) || busy_loop_ep_timeout(start_time, ep);
 }
 
 /*
@@ -403,10 +439,15 @@ static bool ep_busy_loop_end(void *p, unsigned long start_time)
 static bool ep_busy_loop(struct eventpoll *ep, int nonblock)
 {
 	unsigned int napi_id = READ_ONCE(ep->napi_id);
+	u16 budget = READ_ONCE(ep->busy_poll_budget);
+	bool prefer_busy_poll = READ_ONCE(ep->prefer_busy_poll);
+
+	if (!budget)
+		budget = BUSY_POLL_BUDGET;
 
-	if ((napi_id >= MIN_NAPI_ID) && net_busy_loop_on()) {
-		napi_busy_loop(napi_id, nonblock ? NULL : ep_busy_loop_end, ep, false,
-			       BUSY_POLL_BUDGET);
+	if (napi_id >= MIN_NAPI_ID && ep_busy_loop_on(ep)) {
+		napi_busy_loop(napi_id, nonblock ? NULL : ep_busy_loop_end,
+			       ep, prefer_busy_poll, budget);
 		if (ep_events_available(ep))
 			return true;
 		/*
@@ -425,12 +466,12 @@ static bool ep_busy_loop(struct eventpoll *ep, int nonblock)
  */
 static inline void ep_set_busy_poll_napi_id(struct epitem *epi)
 {
-	struct eventpoll *ep;
+	struct eventpoll *ep = epi->ep;
 	unsigned int napi_id;
 	struct socket *sock;
 	struct sock *sk;
 
-	if (!net_busy_loop_on())
+	if (!ep_busy_loop_on(ep))
 		return;
 
 	sock = sock_from_file(epi->ffd.file);
@@ -442,7 +483,6 @@ static inline void ep_set_busy_poll_napi_id(struct epitem *epi)
 		return;
 
 	napi_id = READ_ONCE(sk->sk_napi_id);
-	ep = epi->ep;
 
 	/* Non-NAPI IDs can be rejected
 	 * or
@@ -455,6 +495,49 @@ static inline void ep_set_busy_poll_napi_id(struct epitem *epi)
 	ep->napi_id = napi_id;
 }
 
+static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
+				  unsigned long arg)
+{
+	struct eventpoll *ep = file->private_data;
+	void __user *uarg = (void __user *)arg;
+	struct epoll_params epoll_params;
+
+	switch (cmd) {
+	case EPIOCSPARAMS:
+		if (copy_from_user(&epoll_params, uarg, sizeof(epoll_params)))
+			return -EFAULT;
+
+		/* pad byte must be zero */
+		if (epoll_params.__pad)
+			return -EINVAL;
+
+		if (epoll_params.busy_poll_usecs > S32_MAX)
+			return -EINVAL;
+
+		if (epoll_params.prefer_busy_poll > 1)
+			return -EINVAL;
+
+		if (epoll_params.busy_poll_budget > NAPI_POLL_WEIGHT &&
+		    !capable(CAP_NET_ADMIN))
+			return -EPERM;
+
+		WRITE_ONCE(ep->busy_poll_usecs, epoll_params.busy_poll_usecs);
+		WRITE_ONCE(ep->busy_poll_budget, epoll_params.busy_poll_budget);
+		WRITE_ONCE(ep->prefer_busy_poll, epoll_params.prefer_busy_poll);
+		return 0;
+	case EPIOCGPARAMS:
+		memset(&epoll_params, 0, sizeof(epoll_params));
+		epoll_params.busy_poll_usecs = READ_ONCE(ep->busy_poll_usecs);
+		epoll_params.busy_poll_budget = READ_ONCE(ep->busy_poll_budget);
+		epoll_params.prefer_busy_poll = READ_ONCE(ep->prefer_busy_poll);
+		if (copy_to_user(uarg, &epoll_params, sizeof(epoll_params)))
+			return -EFAULT;
+		return 0;
+	default:
+		return -ENOIOCTLCMD;
+	}
+}
+
 #else
 
 static inline bool ep_busy_loop(struct eventpoll *ep, int nonblock)
@@ -466,6 +549,12 @@ static inline void ep_set_busy_poll_napi_id(struct epitem *epi)
 {
 }
 
+static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
+				  unsigned long arg)
+{
+	return -EOPNOTSUPP;
+}
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 /*
@@ -825,6 +914,27 @@ static void ep_clear_and_put(struct eventpoll *ep)
 	ep_free(ep);
 }
 
+static long ep_eventpoll_ioctl(struct file *file, unsigned int cmd,
+			       unsigned long arg)
+{
+	int ret;
+
+	if (!is_file_epoll(file))
+		return -EINVAL;
+
+	switch (cmd) {
+	case EPIOCSPARAMS:
+	case EPIOCGPARAMS:
+		ret = ep_eventpoll_bp_ioctl(file, cmd, arg);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
 static int ep_eventpoll_release(struct inode *inode, struct file *file)
 {
 	struct eventpoll *ep = file->private_data;
@@ -931,6 +1041,8 @@ static const struct file_operations eventpoll_fops = {
 	.release	= ep_eventpoll_release,
 	.poll		= ep_eventpoll_poll,
 	.llseek		= noop_llseek,
+	.unlocked_ioctl	= ep_eventpoll_ioctl,
+	.compat_ioctl	= compat_ptr_ioctl,
 };
 
 /*
@@ -2058,6 +2170,11 @@ static int do_epoll_create(int flags)
 		error = PTR_ERR(file);
 		goto out_free_fd;
 	}
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	ep->busy_poll_usecs = 0;
+	ep->busy_poll_budget = 0;
+	ep->prefer_busy_poll = false;
+#endif
 	ep->file = file;
 	fd_install(fd, file);
 	return fd;

include/uapi/linux/eventpoll.h
Lines changed: 13 additions & 0 deletions

@@ -85,4 +85,17 @@ struct epoll_event {
 	__u64 data;
 } EPOLL_PACKED;
 
+struct epoll_params {
+	__u32 busy_poll_usecs;
+	__u16 busy_poll_budget;
+	__u8 prefer_busy_poll;
+
+	/* pad the struct to a multiple of 64bits */
+	__u8 __pad;
+};
+
+#define EPOLL_IOC_TYPE 0x8A
+#define EPIOCSPARAMS _IOW(EPOLL_IOC_TYPE, 0x01, struct epoll_params)
+#define EPIOCGPARAMS _IOR(EPOLL_IOC_TYPE, 0x02, struct epoll_params)
+
 #endif /* _UAPI_LINUX_EVENTPOLL_H */
