
Commit f9bfe4e

Eric Dumazet authored and davem330 committed
tcp: lack of available data can also cause TSO defer
tcp_tso_should_defer() can return true in three different cases:

1) We are cwnd-limited
2) We are rwnd-limited
3) We are application-limited.

Neal pointed out that my recent fix went too far, since it assumed that if we were not in case 1), we must be rwnd-limited.

Fix this by properly populating the is_cwnd_limited and is_rwnd_limited booleans.

After this change, we can finally move the silly check for the FIN flag so that it only applies in the application-limited case. The same move for the EOR bit will be handled in net-next, since commit 1c09f7d ("tcp: do not try to defer skbs with eor mark (MSG_EOR)") is scheduled for linux-4.21.

Tested by running 200 concurrent "netperf -t TCP_RR -- -r 60000,100" sessions and checking that none of them was rwnd_limited in the chrono_stat output of the "ss -ti" command.

Fixes: 4172754 ("tcp: Do not underestimate rwnd_limited")
Signed-off-by: Eric Dumazet <[email protected]>
Suggested-by: Neal Cardwell <[email protected]>
Reviewed-by: Neal Cardwell <[email protected]>
Acked-by: Soheil Hassas Yeganeh <[email protected]>
Reviewed-by: Yuchung Cheng <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
1 parent 1b4e5ad commit f9bfe4e
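To make the three-way split concrete, here is a minimal userspace sketch of the classification this patch adds near the end of tcp_tso_should_defer(). It is an illustration only: classify_defer() and its inputs are hypothetical simplifications, and the real function has several earlier "send now" exits that this model omits.

#include <stdio.h>

enum defer_reason { CWND_LIMITED, RWND_LIMITED, APP_LIMITED };

/* By the time the kernel reaches this point it has already decided
 * that deferring is advisable; the remaining job is to record *why*,
 * so that tcp_write_xmit() can account busy time to the right
 * chrono_stat bucket (the counters shown by "ss -ti").
 */
static enum defer_reason classify_defer(unsigned int cong_win,
                                        unsigned int send_win,
                                        unsigned int skb_len)
{
        if (cong_win < send_win) {
                if (cong_win <= skb_len)
                        return CWND_LIMITED;    /* case 1 */
        } else {
                if (send_win <= skb_len)
                        return RWND_LIMITED;    /* case 2 */
        }
        return APP_LIMITED;                     /* case 3: lack of data */
}

int main(void)
{
        /* cwnd is the bottleneck: it is the smaller window and full. */
        printf("%d\n", classify_defer(1000, 5000, 1400)); /* 0: cwnd */
        /* rwnd is the bottleneck: the peer's window is full. */
        printf("%d\n", classify_defer(5000, 1000, 1400)); /* 1: rwnd */
        /* Neither window is full: the app has not provided data yet. */
        printf("%d\n", classify_defer(5000, 5000, 1400)); /* 2: app */
        return 0;
}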

File tree

net/ipv4/tcp_output.c

1 file changed: 24 additions, 11 deletions


net/ipv4/tcp_output.c

Lines changed: 24 additions & 11 deletions
@@ -1904,17 +1904,16 @@ static int tso_fragment(struct sock *sk, enum tcp_queue tcp_queue,
  * This algorithm is from John Heffner.
  */
 static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb,
-                                 bool *is_cwnd_limited, u32 max_segs)
+                                 bool *is_cwnd_limited,
+                                 bool *is_rwnd_limited,
+                                 u32 max_segs)
 {
        const struct inet_connection_sock *icsk = inet_csk(sk);
        u32 age, send_win, cong_win, limit, in_flight;
        struct tcp_sock *tp = tcp_sk(sk);
        struct sk_buff *head;
        int win_divisor;
 
-       if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
-               goto send_now;
-
        if (icsk->icsk_ca_state >= TCP_CA_Recovery)
                goto send_now;
 
@@ -1973,10 +1972,27 @@ static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb,
        if (age < (tp->srtt_us >> 4))
                goto send_now;
 
-       /* Ok, it looks like it is advisable to defer. */
+       /* Ok, it looks like it is advisable to defer.
+        * Three cases are tracked :
+        * 1) We are cwnd-limited
+        * 2) We are rwnd-limited
+        * 3) We are application limited.
+        */
+       if (cong_win < send_win) {
+               if (cong_win <= skb->len) {
+                       *is_cwnd_limited = true;
+                       return true;
+               }
+       } else {
+               if (send_win <= skb->len) {
+                       *is_rwnd_limited = true;
+                       return true;
+               }
+       }
 
-       if (cong_win < send_win && cong_win <= skb->len)
-               *is_cwnd_limited = true;
+       /* If this packet won't get more data, do not wait. */
+       if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
+               goto send_now;
 
        return true;
 
@@ -2356,11 +2372,8 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
        } else {
                if (!push_one &&
                    tcp_tso_should_defer(sk, skb, &is_cwnd_limited,
-                                        max_segs)) {
-                       if (!is_cwnd_limited)
-                               is_rwnd_limited = true;
+                                        &is_rwnd_limited, max_segs))
                        break;
-               }
        }
 
        limit = mss_now;
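The last hunk is the caller-side half of the fix: tcp_write_xmit() used to infer "rwnd-limited" from "not cwnd-limited", which mislabeled application-limited deferrals. A small standalone demonstration of that inference error (illustrative names only, not kernel code):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
        /* An application-limited defer: the callee now reports each
         * cause explicitly, so it sets neither boolean.
         */
        bool is_cwnd_limited = false;
        bool is_rwnd_limited = false;

        /* Old caller logic: anything not cwnd-limited was charged to
         * rwnd_limited, inflating that chrono_stat counter.
         */
        bool old_rwnd = is_rwnd_limited;
        if (!is_cwnd_limited)
                old_rwnd = true;

        printf("application-limited defer: old rwnd_limited=%d, new rwnd_limited=%d\n",
               old_rwnd, is_rwnd_limited);
        return 0;
}

This is exactly the mismatch the netperf test in the commit message checks for: with the fix, none of the request/response sessions show up as rwnd_limited in "ss -ti".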
