
Commit 832d11c

ij1 authored and davem330 committed
tcp: Try to restore large SKBs while SACK processing
During SACK processing, most of the benefits of TSO are eaten by the SACK
blocks that one-by-one fragment SKBs to MSS-sized chunks. Then we're in
trouble when the cleanup work for them has to be done when a large cumulative
ACK arrives. Try to return to the pre-split state already while more and more
SACK info gets discovered, by combining newly discovered SACK areas with the
previous skb if that one is SACKed as well.

This approach has a number of benefits:

1) The processing overhead is spread more equally over the RTT
2) The write queue has fewer skbs to process (this affects everything
   which has to walk the queue past the SACKed areas)
3) The write queue stays consistent the whole time, so no other part of
   TCP has to be aware of this (this was not the case with some other
   approach that was, well, quite intrusive all around)
4) clean_rtx_queue can release most of the pages with a single put_page
   instead of the previous PAGE_SIZE/mss+1 calls

In case a hole is fully filled by the new SACK block, we attempt to combine
the next skb too, which allows construction of skbs that are even larger than
what TSO split them to, and it handles the hole-on-every-nth-segment patterns
that often occur during slow start overshoot pretty nicely. For this to be
really useful, though, a retransmission would also have to get lost, since
cumulative ACKs advance one hole at a time in the most typical case.

TODO: handle upwards-only merging. That should be rather easy when a segment
is fully SACKed, but I'm leaving it as a future work item (it won't make a
very large difference anyway, since the current approach already covers quite
a lot of the normal cases).

I was earlier thinking of some sophisticated way of tracking timestamps of
the first and the last segment, but later realized that storing the timestamp
of the last segment won't be necessary at all. The cases that can occur are
basically either:

1) ambiguous => no sensible measurement can be taken anyway
2) non-ambiguous, due to reordering => having the timestamp of the last
   segment there just skews things further off rather than doing any good,
   since the ACK got triggered by one of the holes (besides some subtle
   issues that would make determining the right hole/skb an even harder
   problem). Anyway, it has nothing to do with this change then.

I chose to route some abnormal-looking cases through goto noop; some could be
handled differently (e.g., by stopping the walk at that skb), but again, they
either shouldn't happen at all or are rare enough to make no difference in
practice.

In theory this change (as a whole) could cause some macroscale regression
(globally) because of cache misses that are taken over the round-trip time,
but it very likely comes out ahead because of far fewer (local) cache misses
for the other write queue walkers and for the big recovery-clearing
cumulative ACK.

Worth noting: these benefits would be very easy to get also without TSO/GSO
being on, as long as the data is in pages so that we can merge them.
Currently I won't let that happen, because DSACK splitting at a fragment
would mess up the pcounts due to sk_can_gso in tcp_set_skb_tso_segs. Once
DSACK fragmenting is avoided, some of these conditions can be made less
strict.

TODO: I will probably have to convert the excessive pointer passing to
struct sacktag_state... :-)

My testing revealed that a considerable number of skbs couldn't be shifted
because they were cloned (most likely still awaiting tx reclaim)...

[The rest is left as future work instead, since I repeatably got EFAULT from
tcpdump's recvfrom when I added pskb_expand_head to deal with clones, so I
separated that into another, later patch]

...To counter that, I gave up on the fifth advantage:

5) When growing the previous SACK block, fewer allocations for new skbs are
   done; basically a new alloc is needed only when a new hole is detected and
   when the previous skb runs out of frag space

...which now only happens if reclaim is fast enough to dispose of the clone
before the SACK block comes in (the window is an RTT long); otherwise we'll
have to alloc some.

With clones being handled I got these numbers (they will be somewhat worse
without that), taken with fine-grained MIBs:

    TCPSackShifted                    398
    TCPSackMerged                     877
    TCPSackShiftFallback              320
    TCPSACKCOLLAPSEFALLBACKGSO          0
    TCPSACKCOLLAPSEFALLBACKSKBBITS      0
    TCPSACKCOLLAPSEFALLBACKSKBDATA      0
    TCPSACKCOLLAPSEFALLBACKBELOW        0
    TCPSACKCOLLAPSEFALLBACKFIRST        1
    TCPSACKCOLLAPSEFALLBACKPREVBITS   318
    TCPSACKCOLLAPSEFALLBACKMSS          1
    TCPSACKCOLLAPSEFALLBACKNOHEAD       0
    TCPSACKCOLLAPSEFALLBACKSHIFT        0
    TCPSACKCOLLAPSENOOPSEQ              0
    TCPSACKCOLLAPSENOOPSMALLPCOUNT      0
    TCPSACKCOLLAPSENOOPSMALLLEN         0
    TCPSACKCOLLAPSEHOLE                12

Signed-off-by: Ilpo Järvinen <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
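
Only three of the four changed files appear in the excerpt below; the TCP-side
SACK walker that actually drives the collapsing is in the fourth. Purely as
orientation, the decision described above amounts to something like the
following sketch. It is illustrative only: the function name and the exact set
of guards are assumptions, not the real TCP code from this commit.

/* Illustrative sketch only, not the actual TCP hunk of this commit:
 * when a newly reported SACK block covers 'skb' and the skb just before
 * it in the write queue is already SACKed, try to shift skb's paged data
 * into that neighbour so the queue drifts back towards its pre-split
 * (TSO-sized) state. Kernel context, needs <net/tcp.h>.
 */
static bool sack_try_collapse(struct sock *sk, struct sk_buff *skb)
{
        struct sk_buff *prev;

        if (skb == tcp_write_queue_head(sk))
                return false;           /* no predecessor to merge into */
        prev = tcp_write_queue_prev(sk, skb);

        if (!(TCP_SKB_CB(prev)->sacked & TCPCB_SACKED_ACKED))
                return false;           /* neighbour not SACKed yet */
        if (skb_cloned(skb) || skb_cloned(prev))
                return false;           /* shifting clones is a no-go */
        if (skb_headlen(skb))
                return false;           /* skb_shift() moves paged data only */

        /* The real code also fixes up sequence numbers and pcounts and
         * frees skb once it has been emptied; all of that is omitted here.
         */
        return skb_shift(prev, skb, skb->len) > 0;
}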
1 parent f58b22f commit 832d11c

4 files changed (+427, -7 lines)

include/linux/skbuff.h

Lines changed: 33 additions & 0 deletions
@@ -492,6 +492,19 @@ static inline bool skb_queue_is_last(const struct sk_buff_head *list,
         return (skb->next == (struct sk_buff *) list);
 }
 
+/**
+ * skb_queue_is_first - check if skb is the first entry in the queue
+ * @list: queue head
+ * @skb: buffer
+ *
+ * Returns true if @skb is the first buffer on the list.
+ */
+static inline bool skb_queue_is_first(const struct sk_buff_head *list,
+                                      const struct sk_buff *skb)
+{
+        return (skb->prev == (struct sk_buff *) list);
+}
+
 /**
  * skb_queue_next - return the next packet in the queue
  * @list: queue head
@@ -510,6 +523,24 @@ static inline struct sk_buff *skb_queue_next(const struct sk_buff_head *list,
         return skb->next;
 }
 
+/**
+ * skb_queue_prev - return the prev packet in the queue
+ * @list: queue head
+ * @skb: current buffer
+ *
+ * Return the prev packet in @list before @skb.  It is only valid to
+ * call this if skb_queue_is_first() evaluates to false.
+ */
+static inline struct sk_buff *skb_queue_prev(const struct sk_buff_head *list,
+                                             const struct sk_buff *skb)
+{
+        /* This BUG_ON may seem severe, but if we just return then we
+         * are going to dereference garbage.
+         */
+        BUG_ON(skb_queue_is_first(list, skb));
+        return skb->prev;
+}
+
 /**
  * skb_get - reference buffer
  * @skb: buffer to reference
@@ -1652,6 +1683,8 @@ extern int             skb_splice_bits(struct sk_buff *skb,
 extern void            skb_copy_and_csum_dev(const struct sk_buff *skb, u8 *to);
 extern void            skb_split(struct sk_buff *skb,
                                  struct sk_buff *skb1, const u32 len);
+extern int             skb_shift(struct sk_buff *tgt, struct sk_buff *skb,
+                                 int shiftlen);
 
 extern struct sk_buff *skb_segment(struct sk_buff *skb, int features);
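
The two helpers above mirror the existing skb_queue_is_last()/skb_queue_next()
pair. As a minimal illustrative sketch of the intended pairing (the wrapper
name below is made up for illustration, not part of the patch):

/* Illustrative only: skb_queue_prev() must never be called on the first
 * buffer, so pair it with skb_queue_is_first() exactly as
 * skb_queue_next() is paired with skb_queue_is_last().
 */
static inline struct sk_buff *skb_queue_prev_or_null(const struct sk_buff_head *list,
                                                     const struct sk_buff *skb)
{
        if (skb_queue_is_first(list, skb))
                return NULL;            /* stepping back would hit the BUG_ON */
        return skb_queue_prev(list, skb);
}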

include/net/tcp.h

Lines changed: 5 additions & 0 deletions
@@ -1192,6 +1192,11 @@ static inline struct sk_buff *tcp_write_queue_next(struct sock *sk, struct sk_buff *skb)
         return skb_queue_next(&sk->sk_write_queue, skb);
 }
 
+static inline struct sk_buff *tcp_write_queue_prev(struct sock *sk, struct sk_buff *skb)
+{
+        return skb_queue_prev(&sk->sk_write_queue, skb);
+}
+
 #define tcp_for_write_queue(skb, sk)                                    \
         skb_queue_walk(&(sk)->sk_write_queue, skb)
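
No "is_first" companion is added on the TCP side, so a caller presumably
guards the new accessor by comparing against the head of the write queue,
roughly as in this hedged sketch (the wrapper name is illustrative, not a
kernel helper):

/* Hedged sketch: tcp_write_queue_prev() has no "is_first" helper of its
 * own, so compare against tcp_write_queue_head() before stepping back.
 */
static inline struct sk_buff *tcp_prev_or_null(struct sock *sk,
                                               struct sk_buff *skb)
{
        if (skb == tcp_write_queue_head(sk))
                return NULL;            /* skb is the first packet */
        return tcp_write_queue_prev(sk, skb);
}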

net/core/skbuff.c

Lines changed: 140 additions & 0 deletions
@@ -2018,6 +2018,146 @@ void skb_split(struct sk_buff *skb, struct sk_buff *skb1, const u32 len)
         skb_split_no_header(skb, skb1, len, pos);
 }
 
+/* Shifting from/to a cloned skb is a no-go.
+ *
+ * TODO: handle cloned skbs by using pskb_expand_head()
+ */
+static int skb_prepare_for_shift(struct sk_buff *skb)
+{
+        return skb_cloned(skb);
+}
+
+/**
+ * skb_shift - Shifts paged data partially from skb to another
+ * @tgt: buffer into which tail data gets added
+ * @skb: buffer from which the paged data comes from
+ * @shiftlen: shift up to this many bytes
+ *
+ * Attempts to shift up to shiftlen worth of bytes, which may be less than
+ * the length of the skb, from skb to tgt. Returns number of bytes shifted.
+ * It's up to caller to free skb if everything was shifted.
+ *
+ * If @tgt runs out of frags, the whole operation is aborted.
+ *
+ * Skb cannot include anything else but paged data while tgt is allowed
+ * to have non-paged data as well.
+ *
+ * TODO: full sized shift could be optimized but that would need
+ * specialized skb free'er to handle frags without up-to-date nr_frags.
+ */
+int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
+{
+        int from, to, merge, todo;
+        struct skb_frag_struct *fragfrom, *fragto;
+
+        BUG_ON(shiftlen > skb->len);
+        BUG_ON(skb_headlen(skb));       /* Would corrupt stream */
+
+        todo = shiftlen;
+        from = 0;
+        to = skb_shinfo(tgt)->nr_frags;
+        fragfrom = &skb_shinfo(skb)->frags[from];
+
+        /* Actual merge is delayed until the point when we know we can
+         * commit all, so that we don't have to undo partial changes
+         */
+        if (!to ||
+            !skb_can_coalesce(tgt, to, fragfrom->page, fragfrom->page_offset)) {
+                merge = -1;
+        } else {
+                merge = to - 1;
+
+                todo -= fragfrom->size;
+                if (todo < 0) {
+                        if (skb_prepare_for_shift(skb) ||
+                            skb_prepare_for_shift(tgt))
+                                return 0;
+
+                        fragto = &skb_shinfo(tgt)->frags[merge];
+
+                        fragto->size += shiftlen;
+                        fragfrom->size -= shiftlen;
+                        fragfrom->page_offset += shiftlen;
+
+                        goto onlymerged;
+                }
+
+                from++;
+        }
+
+        /* Skip full, not-fitting skb to avoid expensive operations */
+        if ((shiftlen == skb->len) &&
+            (skb_shinfo(skb)->nr_frags - from) > (MAX_SKB_FRAGS - to))
+                return 0;
+
+        if (skb_prepare_for_shift(skb) || skb_prepare_for_shift(tgt))
+                return 0;
+
+        while ((todo > 0) && (from < skb_shinfo(skb)->nr_frags)) {
+                if (to == MAX_SKB_FRAGS)
+                        return 0;
+
+                fragfrom = &skb_shinfo(skb)->frags[from];
+                fragto = &skb_shinfo(tgt)->frags[to];
+
+                if (todo >= fragfrom->size) {
+                        *fragto = *fragfrom;
+                        todo -= fragfrom->size;
+                        from++;
+                        to++;
+
+                } else {
+                        get_page(fragfrom->page);
+                        fragto->page = fragfrom->page;
+                        fragto->page_offset = fragfrom->page_offset;
+                        fragto->size = todo;
+
+                        fragfrom->page_offset += todo;
+                        fragfrom->size -= todo;
+                        todo = 0;
+
+                        to++;
+                        break;
+                }
+        }
+
+        /* Ready to "commit" this state change to tgt */
+        skb_shinfo(tgt)->nr_frags = to;
+
+        if (merge >= 0) {
+                fragfrom = &skb_shinfo(skb)->frags[0];
+                fragto = &skb_shinfo(tgt)->frags[merge];
+
+                fragto->size += fragfrom->size;
+                put_page(fragfrom->page);
+        }
+
+        /* Reposition in the original skb */
+        to = 0;
+        while (from < skb_shinfo(skb)->nr_frags)
+                skb_shinfo(skb)->frags[to++] = skb_shinfo(skb)->frags[from++];
+        skb_shinfo(skb)->nr_frags = to;
+
+        BUG_ON(todo > 0 && !skb_shinfo(skb)->nr_frags);
+
+onlymerged:
+        /* Most likely the tgt won't ever need its checksum anymore, skb on
+         * the other hand might need it if it needs to be resent
+         */
+        tgt->ip_summed = CHECKSUM_PARTIAL;
+        skb->ip_summed = CHECKSUM_PARTIAL;
+
+        /* Yak, is it really working this way? Some helper please? */
+        skb->len -= shiftlen;
+        skb->data_len -= shiftlen;
+        skb->truesize -= shiftlen;
+        tgt->len += shiftlen;
+        tgt->data_len += shiftlen;
+        tgt->truesize += shiftlen;
+
+        return shiftlen;
+}
+
 /**
  * skb_prepare_seq_read - Prepare a sequential read of skb data
  * @skb: the buffer to read
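
The kernel-doc above leaves everything beyond the len/data_len/truesize
accounting to the caller, including freeing a fully drained skb. A hedged
sketch of that calling convention follows; the wrapper is illustrative,
assumes skb has already been unlinked from any queue, and omits the TCP
sequence number and pcount fixups the real caller has to do:

/* Illustrative sketch of skb_shift()'s contract: it may shift fewer
 * bytes than requested, returns 0 when it has to abort (cloned skbs,
 * no frag room left in tgt), requires skb to carry paged data only,
 * and leaves freeing a fully drained skb to the caller.
 */
static int shift_and_maybe_free(struct sk_buff *tgt, struct sk_buff *skb)
{
        int shifted;

        if (skb_headlen(skb))
                return 0;               /* only pure paged data may be shifted */

        shifted = skb_shift(tgt, skb, skb->len);
        if (shifted && !skb->len)
                kfree_skb(skb);         /* everything moved, skb is now empty */

        return shifted;
}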
