
Commit 2f715c1

yuchungcheng authored and davem330 committed
tcp: do not rearm RTO when future data are sacked
Patch ed08495 "tcp: use RTT from SACK for RTO" always re-arms RTO upon obtaining an RTT sample from newly sacked data. But technically RTO should only be re-armed when the data sent before the last (re)transmission of the write queue head are (s)acked. Otherwise the RTO may continue to extend during loss recovery on data sent in the future. Note that RTTs from ACKs or timestamps do not have this problem, as the RTT source must be from data sent before.

The new RTO re-arm policy is:
1) Always re-arm RTO if SND.UNA is advanced.
2) Re-arm RTO if a sack RTT is available, provided the sacked data was sent before the last time write_queue_head was sent.

Signed-off-by: Larry Brakmo <[email protected]>
Signed-off-by: Yuchung Cheng <[email protected]>
Acked-by: Neal Cardwell <[email protected]>
Acked-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
1 parent 2909d87 commit 2f715c1
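
The gist of the new policy can be illustrated with a small standalone sketch. This is not the kernel code itself; the function should_rearm_rto and its parameter names are hypothetical, chosen only to restate the two rules from the commit message:

#include <stdbool.h>
#include <stdint.h>

/* Simplified, hypothetical sketch of the re-arm policy described above;
 * not the actual tcp_clean_rtx_queue() logic.
 *
 *   snd_una_advanced - the cumulative ACK moved SND.UNA forward (rule 1)
 *   sack_rtt_us      - RTT sample from newly SACKed data, negative if none
 *   now_us           - current time
 *   head_sent_us     - time the write-queue head was last (re)transmitted
 */
static bool should_rearm_rto(bool snd_una_advanced, int64_t sack_rtt_us,
                             int64_t now_us, int64_t head_sent_us)
{
        if (snd_una_advanced)
                return true;            /* rule 1: SND.UNA advanced */
        if (sack_rtt_us < 0)
                return false;           /* no SACK RTT sample available */
        /* rule 2: a SACK RTT longer than the time since the head was last
         * (re)transmitted can only come from data sent before that
         * (re)transmission, so re-arming cannot keep extending the timer
         * on data sent during recovery.
         */
        return sack_rtt_us > now_us - head_sent_us;
}

In the actual patch the same comparison appears as sack_rtt > (s32)(now - TCP_SKB_CB(skb)->when) in the diff below.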

File tree: 1 file changed (+10 −3 lines)

net/ipv4/tcp_input.c

Lines changed: 10 additions & 3 deletions
@@ -2987,6 +2987,7 @@ static int tcp_clean_rtx_queue(struct sock *sk, int prior_fackets,
         s32 seq_rtt = -1;
         s32 ca_seq_rtt = -1;
         ktime_t last_ackt = net_invalid_timestamp();
+        bool rtt_update;
 
         while ((skb = tcp_write_queue_head(sk)) && skb != tcp_send_head(sk)) {
                 struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
@@ -3063,14 +3064,13 @@ static int tcp_clean_rtx_queue(struct sock *sk, int prior_fackets,
         if (skb && (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED))
                 flag |= FLAG_SACK_RENEGING;
 
-        if (tcp_ack_update_rtt(sk, flag, seq_rtt, sack_rtt) ||
-            (flag & FLAG_ACKED))
-                tcp_rearm_rto(sk);
+        rtt_update = tcp_ack_update_rtt(sk, flag, seq_rtt, sack_rtt);
 
         if (flag & FLAG_ACKED) {
                 const struct tcp_congestion_ops *ca_ops
                         = inet_csk(sk)->icsk_ca_ops;
 
+                tcp_rearm_rto(sk);
                 if (unlikely(icsk->icsk_mtup.probe_size &&
                              !after(tp->mtu_probe.probe_seq_end, tp->snd_una))) {
                         tcp_mtup_probe_success(sk);
@@ -3109,6 +3109,13 @@ static int tcp_clean_rtx_queue(struct sock *sk, int prior_fackets,
 
                         ca_ops->pkts_acked(sk, pkts_acked, rtt_us);
                 }
+        } else if (skb && rtt_update && sack_rtt >= 0 &&
+                   sack_rtt > (s32)(now - TCP_SKB_CB(skb)->when)) {
+                /* Do not re-arm RTO if the sack RTT is measured from data sent
+                 * after when the head was last (re)transmitted. Otherwise the
+                 * timeout may continue to extend in loss recovery.
+                 */
+                tcp_rearm_rto(sk);
         }
 
 #if FASTRETRANS_DEBUG > 0
