Congestion Control Enhancements for QUIC in NGINX

Imagine a busy highway during rush hour, where too many cars are trying to squeeze through limited lanes. This is similar to what happens in computer networks when too much data is sent simultaneously. Just like traffic jams, network congestion can significantly slow down data transmission. This is where congestion control comes into play.

Congestion control is a crucial mechanism that helps prevent network congestion, ensuring that data flows smoothly and efficiently. Every network has limited bandwidth. If too much data is sent at the same time, packets may be lost or delayed, causing slowdowns and spurious retransmits. To avoid network congestion, endpoints need to know how much data can be sent over the network at any given moment.

To keep up with the latest advancements in web performance, NGINX has introduced a new congestion control algorithm, CUBIC, in its latest release. This release also includes several enhancements to our QUIC implementation that significantly increase download speeds and fix a number of bugs.

How Does Congestion Control Work?

Congestion control is at the heart of both the Transmission Control Protocol (TCP) and the QUIC protocol. Both protocols rely on a system of acknowledgements: the receiver of a packet sends an acknowledgement to indicate that the packet was successfully delivered. Until the acknowledgement is received, the packet is considered “in-flight”.

Congestion control algorithms operate by limiting the amount of in-flight data at any given moment. This limit is known as the Congestion Window (CWND). A peer that wants to send data may have to wait until previously sent packets are acknowledged and the amount of in-flight data drops below the congestion window.
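
As an illustration, the gating logic can be pictured as in the following C sketch (hypothetical names and structure, not NGINX source):

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-connection congestion state. */
typedef struct {
    size_t cwnd;       /* congestion window, in bytes */
    size_t in_flight;  /* bytes sent but not yet acknowledged */
} cc_state;

/* A packet may be sent only if the resulting in-flight total
 * stays within the congestion window. */
bool cc_can_send(const cc_state *cc, size_t packet_size)
{
    return cc->in_flight + packet_size <= cc->cwnd;
}

/* On send, the packet counts as in-flight; on acknowledgement,
 * it stops counting, freeing room in the window for new data. */
void cc_on_send(cc_state *cc, size_t packet_size)
{
    cc->in_flight += packet_size;
}

void cc_on_ack(cc_state *cc, size_t packet_size)
{
    cc->in_flight -= packet_size;
}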

The peer's congestion window starts out small. Upon receipt of acknowledgements from the peer, the window size increases, allowing more in-flight packets to be sent. However, if the congestion window grows beyond network capacity, packets may get lost, causing the congestion control algorithm to reduce the CWND size.

Past Congestion Control Solutions

One of the earliest congestion control algorithms is the Reno algorithm, which has been a staple of BSD operating systems since the 1980s. Over the years it has undergone various improvements, with the final version, NewReno, defined in RFC 6582. NewReno is recognized as the standard congestion control algorithm for QUIC, as outlined in RFC 9002, and was the algorithm used by NGINX's QUIC implementation from its rollout in version 1.25.0 through version 1.27.4.

The Reno process begins with a connection in Slow Start mode. In this mode, the congestion window grows exponentially from an initially small value. When a packet loss is detected, the window is reduced by half and the connection switches to Congestion Avoidance mode. In this mode, the congestion window grows linearly until the next packet loss, at which point the window is halved again and the cycle repeats.
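
A simplified C sketch of these window updates, roughly following the NewReno rules from RFC 9002 (names and byte-counting details are illustrative, not NGINX source):

#include <stddef.h>

enum cc_mode { SLOW_START, CONGESTION_AVOIDANCE };

typedef struct {
    enum cc_mode  mode;
    size_t        cwnd;      /* congestion window, in bytes */
    size_t        ssthresh;  /* slow start threshold */
    size_t        mss;       /* maximum datagram size */
} reno_state;

/* Called for every newly acknowledged packet. */
void reno_on_ack(reno_state *cc, size_t acked_bytes)
{
    if (cc->mode == SLOW_START) {
        /* Exponential growth: the window increases by the number
         * of bytes acked, roughly doubling every round trip. */
        cc->cwnd += acked_bytes;
        if (cc->cwnd >= cc->ssthresh) {
            cc->mode = CONGESTION_AVOIDANCE;
        }
    } else {
        /* Linear growth: about one MSS per round trip. */
        cc->cwnd += cc->mss * acked_bytes / cc->cwnd;
    }
}

/* Called when packet loss is detected: halve the window and
 * continue in Congestion Avoidance. */
void reno_on_loss(reno_state *cc)
{
    cc->cwnd /= 2;
    cc->ssthresh = cc->cwnd;
    cc->mode = CONGESTION_AVOIDANCE;
}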

In Figure 1, the Slow Start mode is indicated in green and Reno Congestion Avoidance is shown in red.

Figure 1: Reno congestion control graph

Why CUBIC Congestion Control?

One of the main drawbacks of the Reno congestion control algorithm is its inability to fully utilize fast networks. Reno uses a linear function to increase the congestion window, which can be too slow for high-speed networks. This is where the CUBIC congestion control algorithm comes in.

CUBIC, as described in RFC 9438, uses a cubic function instead of a linear one during the Congestion Avoidance phase. This approach manages the congestion window more dynamically: when a packet loss is likely, the cubic function slows down the congestion window growth; when a loss is less likely, it speeds it up.

At the start of a Congestion Avoidance iteration, the congestion window has just been reduced, making packet loss unlikely. During this period, the cubic function lets the window grow quickly. As the window approaches the point of the previous packet loss, growth slows down to avoid another loss. If no loss occurs, indicating that the previous loss was transient or that network bandwidth has increased since then, window growth accelerates again. This dynamic adjustment allows connections to quickly utilize the bandwidth of fast networks.
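
This behavior follows from the cubic window function defined in RFC 9438, W_cubic(t) = C*(t - K)^3 + W_max, where W_max is the window size at the last loss event, K is the time needed to grow back to W_max, and C is a constant. A minimal C sketch (illustrative, not NGINX source):

#include <math.h>

#define CUBIC_C     0.4   /* growth constant from RFC 9438 */
#define CUBIC_BETA  0.7   /* multiplicative decrease factor */

/* Target congestion window (in segments) t seconds after the last
 * loss event, where w_max is the window size when the loss occurred. */
double cubic_window(double t, double w_max)
{
    /* K: time needed to grow back to w_max after the window
     * was reduced to CUBIC_BETA * w_max. */
    double k = cbrt(w_max * (1.0 - CUBIC_BETA) / CUBIC_C);

    /* For t < K the curve decelerates as it nears w_max (the
     * previous loss point); for t > K it accelerates again. */
    return CUBIC_C * pow(t - k, 3.0) + w_max;
}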

Most modern operating systems, including Linux, FreeBSD 14.0+, macOS, and Windows, already use CUBIC as the default congestion control algorithm.

In Figure 2, the Slow Start mode is indicated in green and CUBIC Congestion Avoidance is shown in blue.

Figure 2: CUBIC congestion control graph

Comparing Congestion Control Benchmarks in NGINX

Experimental QUIC support was first added in NGINX version 1.25.0, using the NewReno congestion control algorithm, as specified in RFC 9002. In NGINX 1.27.5, CUBIC congestion control was added to QUIC. Along with this change, the QUIC implementation received several significant improvements that reduced congestion algorithm aggressiveness, improved tolerance to packet reordering, and fixed RFC compliance issues.

This update resulted in significant performance improvements, as shown in the benchmarks below. These benchmarks compare NGINX 1.27.4 and NGINX 1.27.5, with the MTU set to 1500 on the loopback interface:

# ifconfig lo mtu 1500

Network conditions are emulated using tc-netem. Below is an example for a 6000-packet queue and a 50ms delay. With a 1500-byte MTU, the 6000-packet queue holds 9MB of data, and the 50ms one-way delay yields a 100ms RTT. This configuration therefore translates to a 100ms RTT, a 9MB Bandwidth-Delay Product (BDP), and 720Mbps of bandwidth (9MB per 100ms).

# tc qdisc add dev lo root netem limit 6000 delay 50ms
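
The other scenarios in the benchmark table below can be emulated the same way; assuming the same 1500-byte MTU, the corresponding settings would be, for example, for a 40ms RTT and 750K BDP (500 packets of 1500 bytes, 20ms one-way delay):

# tc qdisc add dev lo root netem limit 500 delay 20ms

And for a 200ms RTT and 1.5M BDP (1000 packets of 1500 bytes, 100ms one-way delay):

# tc qdisc add dev lo root netem limit 1000 delay 100ms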

To test QUIC performance, we use the HTTP/3 client gtlsclient:

$ time gtlsclient -q --exit-on-first-stream-close example.com 8443 https://example.com:8443/500m

In the NGINX configuration, http3_stream_buffer_size is set to 50m.
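
A minimal sketch of a matching server block (the listen port matches the client command above; certificate paths and file location are illustrative):

http {
    server {
        listen 8443 quic reuseport;

        ssl_certificate     example.com.crt;
        ssl_certificate_key example.com.key;

        http3_stream_buffer_size 50m;

        location / {
            root /var/www/html;
        }
    }
}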

Below are the benchmark results, showing the average download time in seconds:

Average download time (seconds)     NGINX 1.27.4   NGINX 1.27.5   Improvement %
500M file / RTT 40ms / BDP 750K     45.99          34.94          24%
500M file / RTT 100ms / BDP 9M      33.95          8.99           73%
100M file / RTT 200ms / BDP 1.5M    24.05          18.14          24%

The benchmark results show that NGINX 1.27.5 provides a significant improvement in download speeds, especially in high BDP environments. This enhancement is due to the switch to CUBIC congestion control, as well as other improvements in the QUIC implementation.

Try It Out and Share Your Feedback

This change was introduced in the NGINX 1.27.5 mainline release and is scheduled to be included in the 1.28.0 stable release. We hope it proves beneficial for your configuration. If you have any feedback, suggestions, or requests for additional features in NGINX, feel free to share them through GitHub Discussions or GitHub Issues.