
Commit 5413d1b

edumazet authored and davem330 committed
net: do not block BH while processing socket backlog
Socket backlog processing is a major latency source. With current TCP socket sk_rcvbuf limits, I have sampled __release_sock() holding cpu for more than 5 ms, and packets being dropped by the NIC once ring buffer is filled. All users are now ready to be called from process context, we can unblock BH and let interrupts be serviced faster. cond_resched_softirq() could be removed, as it has no more user. Signed-off-by: Eric Dumazet <[email protected]> Acked-by: Soheil Hassas Yeganeh <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
1 parent 860fbbc commit 5413d1b

File tree

1 file changed (+8, -14 lines)

net/core/sock.c

Lines changed: 8 additions & 14 deletions
@@ -2019,33 +2019,27 @@ static void __release_sock(struct sock *sk)
 	__releases(&sk->sk_lock.slock)
 	__acquires(&sk->sk_lock.slock)
 {
-	struct sk_buff *skb = sk->sk_backlog.head;
+	struct sk_buff *skb, *next;
 
-	do {
+	while ((skb = sk->sk_backlog.head) != NULL) {
 		sk->sk_backlog.head = sk->sk_backlog.tail = NULL;
-		bh_unlock_sock(sk);
 
-		do {
-			struct sk_buff *next = skb->next;
+		spin_unlock_bh(&sk->sk_lock.slock);
 
+		do {
+			next = skb->next;
 			prefetch(next);
 			WARN_ON_ONCE(skb_dst_is_noref(skb));
 			skb->next = NULL;
 			sk_backlog_rcv(sk, skb);
 
-			/*
-			 * We are in process context here with softirqs
-			 * disabled, use cond_resched_softirq() to preempt.
-			 * This is safe to do because we've taken the backlog
-			 * queue private:
-			 */
-			cond_resched_softirq();
+			cond_resched();
 
 			skb = next;
 		} while (skb != NULL);
 
-		bh_lock_sock(sk);
-	} while ((skb = sk->sk_backlog.head) != NULL);
+		spin_lock_bh(&sk->sk_lock.slock);
+	}
 
 	/*
 	 * Doing the zeroing here guarantee we can not loop forever
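The loop structure above follows a common pattern: take the whole queue private under the lock, drop the lock while each entry is processed, then retake the lock and repeat until the queue stays empty. The following is a minimal userspace sketch of that pattern using a pthread mutex in place of the socket spinlock; it is not kernel code, and the names (struct backlog, process_one, release_backlog) are illustrative, not from the kernel.

```c
#include <pthread.h>
#include <stdlib.h>

struct item {
	struct item *next;
	int val;
};

struct backlog {
	pthread_mutex_t lock;
	struct item *head;
	struct item *tail;
	int processed;		/* running total, for demonstration only */
};

/* Runs with b->lock released, like sk_backlog_rcv() in the loop above. */
static void process_one(struct backlog *b, struct item *it)
{
	b->processed += it->val;
	free(it);
}

static void release_backlog(struct backlog *b)
{
	struct item *it, *next;

	pthread_mutex_lock(&b->lock);
	while ((it = b->head) != NULL) {
		/* Take the whole queue private, as __release_sock() does. */
		b->head = b->tail = NULL;
		pthread_mutex_unlock(&b->lock);

		do {
			next = it->next;
			it->next = NULL;
			process_one(b, it);
			it = next;
		} while (it != NULL);

		/* Retake the lock; new items may have queued meanwhile. */
		pthread_mutex_lock(&b->lock);
	}
	pthread_mutex_unlock(&b->lock);
}
```

Because the queue is detached before the lock is dropped, producers can keep appending to a fresh list while consumers work through the private one; the outer while catches anything queued in the meantime, which is exactly why the kernel version can safely reschedule with cond_resched() mid-loop.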
