
Commit 0f8782e

nealcardwell authored and davem330 committed
tcp_bbr: add BBR congestion control
This commit implements a new TCP congestion control algorithm: BBR (Bottleneck Bandwidth and RTT). A detailed description of BBR will be published in ACM Queue, Vol. 14 No. 5, September-October 2016, as "BBR: Congestion-Based Congestion Control".

BBR has significantly increased throughput and reduced latency for connections on Google's internal backbone networks and google.com and YouTube Web servers.

BBR requires only changes on the sender side, not in the network or the receiver side. Thus it can be incrementally deployed on today's Internet, or in datacenters.

The Internet has predominantly used loss-based congestion control (largely Reno or CUBIC) since the 1980s, relying on packet loss as the signal to slow down. While this worked well for many years, loss-based congestion control is unfortunately outdated in today's networks. On today's Internet, loss-based congestion control causes the infamous bufferbloat problem, often causing seconds of needless queuing delay, since it fills the bloated buffers in many last-mile links. On today's high-speed long-haul links using commodity switches with shallow buffers, loss-based congestion control has abysmal throughput because it over-reacts to losses caused by transient traffic bursts.

In 1981 Kleinrock and Gale showed that the optimal operating point for a network maximizes delivered bandwidth while minimizing delay and loss, not only for single connections but for the network as a whole. Finding that optimal operating point has been elusive, since any single network measurement is ambiguous: network measurements are the result of both bandwidth and propagation delay, and those two cannot be measured simultaneously.

While it is impossible to disambiguate any single bandwidth or RTT measurement, a connection's behavior over time tells a clearer story. BBR uses a measurement strategy designed to resolve this ambiguity.
It combines these measurements with a robust servo loop using recent control systems advances to implement a distributed congestion control algorithm that reacts to actual congestion, not packet loss or transient queue delay, and is designed to converge with high probability to a point near the optimal operating point.

In a nutshell, BBR creates an explicit model of the network pipe by sequentially probing the bottleneck bandwidth and RTT. On the arrival of each ACK, BBR derives the current delivery rate of the last round trip, and feeds it through a windowed max-filter to estimate the bottleneck bandwidth. Conversely it uses a windowed min-filter to estimate the round trip propagation delay. The max-filtered bandwidth and min-filtered RTT estimates form BBR's model of the network pipe.

Using its model, BBR sets control parameters to govern sending behavior. The primary control is the pacing rate: BBR applies a gain multiplier to transmit faster or slower than the observed bottleneck bandwidth. The conventional congestion window (cwnd) is now the secondary control; the cwnd is set to a small multiple of the estimated BDP (bandwidth-delay product) in order to allow full utilization and bandwidth probing while bounding the potential amount of queue at the bottleneck.

When a BBR connection starts, it enters STARTUP mode and applies a high gain to perform an exponential search to quickly probe the bottleneck bandwidth (doubling its sending rate each round trip, like slow start). However, instead of continuing until it fills up the buffer (i.e. a loss), or until delay or ACK spacing reaches some threshold (like Hystart), it uses its model of the pipe to estimate when that pipe is full: it estimates the pipe is full when it notices the estimated bandwidth has stopped growing. At that point it exits STARTUP and enters DRAIN mode, where it reduces its pacing rate to drain the queue it estimates it has created. Then BBR enters steady state.
In steady state, PROBE_BW mode cycles between first pacing faster to probe for more bandwidth, then pacing slower to drain any queue it created if no more bandwidth was available, and then cruising at the estimated bandwidth to utilize the pipe without creating excess queue. Occasionally, on an as-needed basis, it sends significantly slower to probe for RTT (PROBE_RTT mode).

BBR has been fully deployed on Google's wide-area backbone networks and we're experimenting with BBR on Google.com and YouTube on a global scale. Replacing CUBIC with BBR has resulted in significant improvements in network latency and application (RPC, browser, and video) metrics. For more details please refer to our upcoming ACM Queue publication.

Example performance results, to illustrate the difference between BBR and CUBIC:

- Resilience to random loss (e.g. from shallow buffers): Consider a netperf TCP_STREAM test lasting 30 secs on an emulated path with a 10Gbps bottleneck, 100ms RTT, and 1% packet loss rate. CUBIC gets 3.27 Mbps, and BBR gets 9150 Mbps (2798x higher).

- Low latency with the bloated buffers common in today's last-mile links: Consider a netperf TCP_STREAM test lasting 120 secs on an emulated path with a 10Mbps bottleneck, 40ms RTT, and a 1000-packet bottleneck buffer. Both fully utilize the bottleneck bandwidth, but BBR achieves this with a median RTT 25x lower (43 ms instead of 1.09 secs).

Our long-term goal is to improve the congestion control algorithms used on the Internet. We are hopeful that BBR can help advance the efforts toward this goal, and motivate the community to do further research.

Test results, performance evaluations, feedback, and BBR-related discussions are very welcome in the public e-mail list for BBR:
https://groups.google.com/forum/#!forum/bbr-dev

NOTE: BBR *must* be used with the fq qdisc ("man tc-fq") with pacing enabled, since pacing is integral to the BBR design and implementation.
BBR without pacing would not function properly, and may incur unnecessarily high packet loss rates.

Signed-off-by: Van Jacobson <[email protected]>
Signed-off-by: Neal Cardwell <[email protected]>
Signed-off-by: Yuchung Cheng <[email protected]>
Signed-off-by: Nandita Dukkipati <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: Soheil Hassas Yeganeh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
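The NOTE above about the fq qdisc translates into a short configuration sequence. This is a hedged sketch: the interface name "eth0" is an assumption, and on distribution kernels BBR may first need to be loaded as a module.

```shell
# Load the BBR module if it is not built in (assumption: modular kernel).
modprobe tcp_bbr

# Install the fq qdisc, which provides the pacing BBR requires
# (pacing is on by default in fq; see "man tc-fq").
tc qdisc replace dev eth0 root fq pacing

# Select BBR as the system-wide default congestion control.
sysctl -w net.ipv4.tcp_congestion_control=bbr
```

Individual applications can instead opt in per connection with `setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, "bbr", 3)`.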
1 parent 7e74417 commit 0f8782e

File tree

4 files changed

+928
-0
lines changed


include/uapi/linux/inet_diag.h

Lines changed: 13 additions & 0 deletions
@@ -124,6 +124,7 @@ enum {
 	INET_DIAG_PEERS,
 	INET_DIAG_PAD,
 	INET_DIAG_MARK,
+	INET_DIAG_BBRINFO,
 	__INET_DIAG_MAX,
 };
 
@@ -157,8 +158,20 @@ struct tcp_dctcp_info {
 	__u32	dctcp_ab_tot;
 };
 
+/* INET_DIAG_BBRINFO */
+
+struct tcp_bbr_info {
+	/* u64 bw: max-filtered BW (app throughput) estimate in Byte per sec: */
+	__u32	bbr_bw_lo;		/* lower 32 bits of bw */
+	__u32	bbr_bw_hi;		/* upper 32 bits of bw */
+	__u32	bbr_min_rtt;		/* min-filtered RTT in uSec */
+	__u32	bbr_pacing_gain;	/* pacing gain shifted left 8 bits */
+	__u32	bbr_cwnd_gain;		/* cwnd gain shifted left 8 bits */
+};
+
 union tcp_cc_info {
 	struct tcpvegas_info	vegas;
 	struct tcp_dctcp_info	dctcp;
+	struct tcp_bbr_info	bbr;
 };
 #endif /* _UAPI_INET_DIAG_H_ */

net/ipv4/Kconfig

Lines changed: 18 additions & 0 deletions
@@ -640,6 +640,21 @@ config TCP_CONG_CDG
 	  D.A. Hayes and G. Armitage. "Revisiting TCP congestion control using
 	  delay gradients." In Networking 2011. Preprint: http://goo.gl/No3vdg
 
+config TCP_CONG_BBR
+	tristate "BBR TCP"
+	default n
+	---help---
+
+	BBR (Bottleneck Bandwidth and RTT) TCP congestion control aims to
+	maximize network utilization and minimize queues. It builds an explicit
+	model of the bottleneck delivery rate and path round-trip
+	propagation delay. It tolerates packet loss and delay unrelated to
+	congestion. It can operate over LAN, WAN, cellular, wifi, or cable
+	modem links. It can coexist with flows that use loss-based congestion
+	control, and can operate with shallow buffers, deep buffers,
+	bufferbloat, policers, or AQM schemes that do not provide a delay
+	signal. It requires the fq ("Fair Queue") pacing packet scheduler.
+
 choice
 	prompt "Default TCP congestion control"
 	default DEFAULT_CUBIC

@@ -674,6 +689,9 @@ choice
 config DEFAULT_CDG
 	bool "CDG" if TCP_CONG_CDG=y
 
+config DEFAULT_BBR
+	bool "BBR" if TCP_CONG_BBR=y
+
 config DEFAULT_RENO
 	bool "Reno"
 endchoice

net/ipv4/Makefile

Lines changed: 1 addition & 0 deletions
@@ -41,6 +41,7 @@ obj-$(CONFIG_INET_DIAG) += inet_diag.o
 obj-$(CONFIG_INET_TCP_DIAG) += tcp_diag.o
 obj-$(CONFIG_INET_UDP_DIAG) += udp_diag.o
 obj-$(CONFIG_NET_TCPPROBE) += tcp_probe.o
+obj-$(CONFIG_TCP_CONG_BBR) += tcp_bbr.o
 obj-$(CONFIG_TCP_CONG_BIC) += tcp_bic.o
 obj-$(CONFIG_TCP_CONG_CDG) += tcp_cdg.o
 obj-$(CONFIG_TCP_CONG_CUBIC) += tcp_cubic.o
