6 ways HTTP/3 benefits security (and 7 serious concerns)

Jun 29, 2020 · 17 mins

HTTP/3 brings improved performance and reliability, along with various security and privacy benefits, but there are some noteworthy challenges.

A macro shot at the pixel level of a browser displays 'https' and a glowing lock in the address bar.
Credit: RobertAx / Getty Images

HTTP/3, the third official version of the hypertext transfer protocol (HTTP), will not use the transmission control protocol (TCP) as its predecessors did. Instead, it uses Quick UDP Internet Connections (QUIC), a protocol developed by Google in 2012.

QUIC is a transport layer protocol built on multiplexed user datagram protocol (UDP) connections. Unlike TCP, it does not require a three-way handshake; a connection can be established in a single round trip. QUIC therefore substantially improves a web component's network performance, as it uses UDP for every connection between the user-agent and the web server. QUIC also relies on multiplexing to manage multiple interactions between the user-agent and server seamlessly over a single connection, without any one blocking another, which further improves performance over its predecessors.

With its performance and reliability benefits, HTTP/3 is widely considered the way forward. From the security and privacy perspective, both benefits and limitations exist, and most have been extensively detailed in the research literature. This article describes the security benefits HTTP/3 provides and some considerations that must be taken into account.

Security features and benefits

End-to-end encryption With TCP, the payload can be encrypted during transmission, but the transport-specific information remains unencrypted, raising many security and privacy issues. The countermeasures designed and implemented to prevent attacks on this metadata live not in the TCP stack but in the network appliances and middleboxes that handle the protocol and the network. Additionally, the parsers built into load balancers and other network appliances to address these issues carry serious performance costs and may limit future network expansion, which depends on network speed and reliability.

With the QUIC protocol, only the required fields in the network segment are unencrypted; the rest of the information is encrypted by default. Comparing the network segments of TCP and QUIC, fields including the packet flags (packet number and ACK number), window, and options are encrypted in QUIC but not in TCP. The encryption in QUIC helps prevent the pervasive monitoring attacks that were possible against HTTP/3's predecessors, as well as intrusive gathering of protocol artifacts, metadata, and application data.

Figure 1 below shows how the QUIC protocol appears in the network analyzer tool Wireshark. In QUIC's network segment, the internet protocol (IP) layer holds the source and destination IP addresses, UDP holds the source and destination ports, and QUIC contains the public flags, packet number, connection ID, and the encrypted payload.

Credit: Sandeep Jayashankar and Subin Thayyile Kandy

Figure 1: Wireshark snippet showing QUIC protocol’s network segments
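The unencrypted fields visible in Figure 1 correspond to QUIC's version-independent header invariants. As a minimal illustrative sketch (not a full packet parser), the following Python function extracts those always-visible fields, version, destination connection ID, and source connection ID, from a QUIC long-header packet:

```python
import struct

def parse_quic_long_header(datagram: bytes) -> dict:
    """Parse the version-independent (unencrypted) fields of a QUIC
    long-header packet: version, destination CID, and source CID.
    Everything after these fields is protected and not parsed here."""
    first = datagram[0]
    if not (first & 0x80):
        raise ValueError("not a long-header packet")
    # 32-bit version field follows the first byte.
    version = struct.unpack("!I", datagram[1:5])[0]
    # Destination connection ID: 1-byte length, then the ID itself.
    dcid_len = datagram[5]
    dcid = datagram[6:6 + dcid_len]
    off = 6 + dcid_len
    # Source connection ID: same length-prefixed encoding.
    scid_len = datagram[off]
    scid = datagram[off + 1:off + 1 + scid_len]
    return {"version": version, "dcid": dcid.hex(), "scid": scid.hex()}
```

Running this on a hand-built long-header packet (`0xc0` first byte, version 1, a 4-byte destination CID, empty source CID) returns just those header invariants; the payload stays opaque, which is exactly what a passive observer of QUIC traffic is limited to.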

TLS secure connectivity To support end-to-end encryption, QUIC relies heavily on cryptographic and transport layer handshakes. Because QUIC interacts directly with TLS 1.3, encryption is mandated for all originating connections, and there is no option to disable TLS. QUIC also takes responsibility for establishing secure connections with confidentiality and integrity protections for every connection it originates. Unlike the HTTP/2 + TLS implementation, QUIC handles the TLS handshake and alerting mechanisms in its own transport context, which lets QUIC establish cryptographic protections using the keys exchanged in the handshake.

If we consider the protocol as a whole, there are two primary interactions between TLS and QUIC:

  1. QUIC provides a reliable stream abstraction for TLS to send and receive messages through QUIC.
  2. TLS updates the QUIC component with:
    1. the secret, the authenticated encryption algorithm, and a key derivation function (KDF),
    2. packet protection keys,
    3. protocol state changes (such as handshake status and server certificates).

Unlike HTTP/2, which uses TLS's "application_data" records, QUIC uses STREAM frames carried in QUIC packets. TLS handshakes happen in CRYPTO frames, which mainly consist of handshake data in a continuous stream. QUIC is designed to send packets in parallel, sometimes bundling different messages into one and encrypting them together, provided those messages have the same encryption level. This feature benefits network performance while ensuring the proper encryption modes are applied during transmission.
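The "same encryption level" constraint on bundling can be sketched as a simple grouping step. This is an illustrative simplification, not a real QUIC packetizer: frames are bucketed by encryption level so that only frames sharing a level end up in one encrypted payload.

```python
from collections import defaultdict

def coalesce(frames):
    """Bundle frames that share an encryption level into one payload.
    `frames` is a list of (level, payload_bytes) pairs; frames at
    different levels are never merged, mirroring QUIC's rule that
    messages bundled together must share an encryption level."""
    buckets = defaultdict(list)
    for level, payload in frames:
        buckets[level].append(payload)
    # One contiguous payload per encryption level, ready to encrypt.
    return {level: b"".join(chunks) for level, chunks in buckets.items()}
```

Feeding in two "initial"-level frames and one "handshake"-level frame yields two payloads: the initial frames merged together and the handshake frame on its own.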

Full forward secrecy Perfect forward secrecy (PFS) in a protocol is achieved when temporary private keys are exchanged between user-agents and servers. Every session initiated by the user-agent uses a new, unique session key that has no relationship to previous session keys. Because each transaction uses a separate session key, compromising one session key exposes no information from earlier or later sessions. From a strict cryptographic perspective, no key exchange can provide perfect forward secrecy; the term full forward secrecy describes a realistic approximation of PFS.

QUIC uses TLS 1.3, which supports both pre-shared key (PSK) and Diffie-Hellman key exchanges over elliptic curves (ECDHE) or finite fields (DHE). The 0-RTT key exchanges provide full forward secrecy, as the cryptographic specification only accepts forward-secure connections via 0-RTT handshakes. While TLS 1.2 also supports forward secrecy, it is technically lost during session resumption, when the user-agent sends a copy of secret material protected by a symmetric key known only to the server. QUIC provides full forward secrecy even for the initial messages between the user-agent and the server. Also, since the QUIC protocol does not rely on long-term secret keys, QUIC, with the help of TLS 1.3, can provide full forward secrecy to the applications using its protocol layer.
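The core idea behind forward secrecy, fresh ephemeral secrets per session with no long-term key linking them, can be sketched in a few lines. This is a toy model, not the TLS 1.3 key schedule: the HMAC-based KDF and labels here are stand-ins, and `os.urandom` stands in for an (EC)DHE shared secret.

```python
import os
import hmac
import hashlib

def kdf(secret: bytes, info: bytes) -> bytes:
    """Stand-in two-step extract/expand KDF (HMAC-SHA256), loosely
    modeled on HKDF; not the real TLS 1.3 key schedule."""
    prk = hmac.new(b"\x00" * 32, secret, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

def new_session_key() -> bytes:
    """Each session derives its keys from a fresh ephemeral secret, so
    compromising one session key reveals nothing about any other."""
    ephemeral_secret = os.urandom(32)  # stands in for an ephemeral DH result
    return kdf(ephemeral_secret, b"hypothetical quic key label")
```

Two calls to `new_session_key()` produce unrelated 32-byte keys; there is no long-term secret from which both could be recomputed, which is the property the article calls full forward secrecy.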

Replay attack protection The QUIC implementation is designed to store the client's key-derivation values along with nonces. Any repeated request with the same key-derivation value and nonce is identified and discarded by the server. This design is costly, given the protocol traffic overhead between the user-agent and the server: the solution may seem workable in theory, but in practice it makes the protocol bulkier and can hurt performance. Still, it stops any server from accepting the same key more than once at the protocol level. QUIC does not provide replay protection for the initial messages; protection starts right after the server's first reply. The design leaves the initial transaction to be protected by the application, saving protocol overhead. Since web components may use a key derived from the session key, replay attacks can arise at this stage; however, precautionary measures at the application level can mitigate this.
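The server-side mechanism described above, remembering which (key-derivation value, nonce) pairs have been seen and discarding repeats, amounts to a duplicate filter. A minimal sketch (an unbounded in-memory set; a real server would bound and expire this cache, which is the performance cost the article mentions):

```python
class ReplayFilter:
    """Server-side cache of (key-derivation value, nonce) pairs.
    A repeated pair is treated as a replayed request and rejected."""

    def __init__(self):
        self._seen = set()

    def accept(self, key_derivation: bytes, nonce: bytes) -> bool:
        pair = (key_derivation, nonce)
        if pair in self._seen:
            return False  # replay: this key/nonce pair was already used
        self._seen.add(pair)
        return True
```

A first request with a given pair is accepted, an exact repeat is rejected, and the same key with a fresh nonce is accepted again, matching the behavior the protocol design calls for.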

IP spoofing protection QUIC supports address validation during the handshake and requires signed proof of address, mitigating IP spoofing attacks. IP address spoofing is handled in QUIC mainly through the "source-address token," an authenticated-encryption block generated by the server containing the user-agent's IP address and the server's timestamp. User-agents can reuse a source-address token as long as their IP address has not changed due to connectivity changes. Because source-address tokens are bearer tokens, they can be reused, and any IP address restrictions set by the server could in principle be bypassed. However, since the server only responds to the IP address embedded in the token, even a stolen cookie or token may not result in successful IP spoofing. And because QUIC supports short-lived source-address tokens, the window for a successful IP spoofing attack is practically negligible.

SSL downgrade prevention By design, TLS 1.3 protects against TLS downgrade attacks: the protocol mandates a keyed hash over all handshake communications and requires the handshake receiver to verify it. Any tampering with the client's advertised capabilities during the handshake results in termination with an error. Additionally, the CertificateVerify messages between the user-agent and the server include a signature over the hash of all previous messages on the connection. This checksum mechanism in QUIC prevents a successful TLS downgrade attack.

Security impacts of HTTP/3

0-RTT resumption vulnerabilities One of the most advantageous features of HTTP/3 is 0-RTT resumption, which drastically improves connection speed and reduces latency. However, it only works when a previous connection was established successfully and the current transaction uses the pre-shared secret established during that connection.

The 0-RTT resumption feature has some security downsides. The most common attack vector is replay: an adversary resends the initial packet, and in specific scenarios this may lead the server to believe the request came from a previously known client. Another downside is a partial failure of full forward secrecy: if an adversary compromises the tokens, they can decrypt the 0-RTT communications sent by the user-agent.

Connection ID manipulation attacks Connection ID manipulation requires an attacker positioned between the user-agent and the server, manipulating the Connection ID during the initial handshake when the client and server hello messages are exchanged. The handshake proceeds as normal and the server assumes the connection is established, but the user-agent fails to decrypt: the Connection ID is an input to the encryption key derivation, so the user-agent and server compute different keys. The user-agent eventually times out and sends an error message back to the server, reporting that the connection has been terminated. Because the client encrypts that error message with the original encryption key, the server fails to decrypt it and retains the connection state until the idle connection timeout expires (generally 10 minutes).

Performed at a larger scale, the same attack can create a denial of service on the server, with multiple connections retained until their connection state expires. Another way to keep connections alive is to alter other parameters, such as source-address tokens, thereby preventing clients from establishing any connection.
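Why does tampering with the Connection ID break decryption? Because the Connection ID feeds the key derivation on both sides. A toy sketch (HMAC as a stand-in KDF, with a hypothetical label) shows that two parties sharing a secret only agree on packet keys if they saw the same Connection ID:

```python
import hmac
import hashlib

def derive_packet_key(shared_secret: bytes, connection_id: bytes) -> bytes:
    """Toy key derivation: the Connection ID is mixed into the key,
    so endpoints that observed different Connection IDs compute
    different packet-protection keys."""
    return hmac.new(shared_secret, b"cid-label" + connection_id,
                    hashlib.sha256).digest()
```

If a man-in-the-middle rewrites the Connection ID in flight, the server derives its key from the altered ID while the client derives its key from the original, so neither can decrypt the other's packets, producing exactly the stuck-connection state described above.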

UDP amplification attack For a successful amplification attack, the adversary spoofs the victim's IP address and sends a UDP request to the server. If the server sends back a significantly larger UDP response, the adversary can exploit this behavior at scale to create a DDoS scenario.

Specifically, in QUIC, UDP amplification occurs when an adversary obtains an address validation token from the target and releases the IP address that was used to generate the token. The attacker can then open a 0-RTT connection to the server from the same IP address, which may since have been reassigned to a different endpoint. If this setup succeeds, the attacker can potentially make the server send substantial traffic toward the victim. To counter this, HTTP/3 has rate-limiting features and short-lived validation tokens that act as a compensating control for DDoS attacks, partially mitigating the scenario.
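One concrete rate-limiting control in IETF QUIC (RFC 9000) is the anti-amplification limit: before a peer's address is validated, a server may send it at most three times the bytes it has received from that address. A minimal sketch of that accounting:

```python
class AntiAmplificationLimit:
    """Per-address send budget for an unvalidated peer: at most 3x the
    bytes received from it (QUIC's anti-amplification limit). Once the
    address is validated, the cap no longer applies."""
    FACTOR = 3

    def __init__(self):
        self.received = 0
        self.sent = 0
        self.validated = False

    def on_receive(self, nbytes: int) -> None:
        self.received += nbytes

    def may_send(self, nbytes: int) -> bool:
        if self.validated:
            return True
        return self.sent + nbytes <= self.FACTOR * self.received

    def on_send(self, nbytes: int) -> None:
        self.sent += nbytes
```

With 100 bytes received from a spoofed address, the server can emit at most 300 bytes before validation succeeds, capping the amplification factor an attacker can extract.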

Stream exhaustion attack  A stream exhaustion attack occurs when an adversary intentionally opens multiple connection streams until an endpoint is exhausted, flooding it with repeated requests. While specific transport parameters can limit the number of concurrent active streams, a server configuration may intentionally set a higher limit to improve protocol performance, making such servers attractive targets for this attack.
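The transport-parameter defense mentioned above boils down to enforcing a cap on concurrently open streams. A minimal sketch (the parameter name mirrors QUIC's max-streams limits, but this is illustrative, not a real endpoint):

```python
class StreamLimiter:
    """Enforce a cap on concurrently open streams, the defense QUIC's
    max-streams transport parameters provide against stream exhaustion."""

    def __init__(self, max_streams: int):
        self.max_streams = max_streams
        self.open_streams = set()

    def open(self, stream_id: int) -> bool:
        if len(self.open_streams) >= self.max_streams:
            return False  # over the advertised limit: refuse the stream
        self.open_streams.add(stream_id)
        return True

    def close(self, stream_id: int) -> None:
        self.open_streams.discard(stream_id)
```

A flood of stream opens beyond the limit is simply refused; closing a stream frees a slot. The trade-off the article notes is visible here: a generous `max_streams` improves concurrency but enlarges the resource footprint an attacker can force.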

Connection reset attack  Connection reset attacks involve sending stateless resets to victims, creating the possibility of denial of service similar to TCP reset injection attacks. The attack is possible if an adversary can obtain a reset token generated for a connection with a specific connection ID. The attacker can then use that token to reset an active connection with the same connection ID, causing the server to wait on the connection until the timeout occurs. At a larger scale, the server must burn considerable resources just waiting for connections to complete.
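The reason a reset token is dangerous to leak, and useless to guess, is that it is bound to a specific connection ID by a server-side static key. A toy sketch of that binding (the key name is hypothetical; real stacks derive 16-byte tokens similarly from a static key and the connection ID):

```python
import hmac
import hashlib

STATIC_KEY = b"hypothetical-reset-static-key"  # server-wide, never disclosed

def reset_token_for(connection_id: bytes) -> bytes:
    """A stateless reset token bound to one connection ID via a keyed
    hash; without STATIC_KEY the token cannot be forged."""
    return hmac.new(STATIC_KEY, connection_id, hashlib.sha256).digest()[:16]

def is_valid_reset(connection_id: bytes, token: bytes) -> bool:
    """Constant-time check that the presented token matches this CID."""
    return hmac.compare_digest(token, reset_token_for(connection_id))
```

A token captured for one connection ID does not validate against any other, so the attack requires obtaining the token issued for the exact connection being targeted, which is the precondition the article describes.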

QUIC version downgrade attack  QUIC packet protection provides authentication and encryption for all packets in the communication except the version negotiation packets, which negotiate the QUIC version between the user-agent and the server. This could allow an attacker to downgrade a connection to a potentially insecure version of QUIC. The attack is not currently applicable, as there is only one version of QUIC, but it is something to watch for in the future.

Lack of monitoring support Although several user-agents, servers, and reputable websites support HTTP/3 and QUIC, many network appliances, such as reverse/forward proxies, load balancers, web application firewalls, and security event monitoring tools, do not yet fully support HTTP/3. Unlike TCP, a QUIC connection is not tied to sockets in the same way, making it harder to detect hosts and malicious connections. An attacker may be able to relay malicious payloads and exfiltrate data over QUIC while remaining stealthy, as most detection tools do not inspect QUIC traffic.

History of QUIC

In 2016, the Internet Engineering Task Force (IETF) began standardizing Google's QUIC and has since announced IETF QUIC as the backbone of the new HTTP/3. However, IETF QUIC has diverged significantly from the original QUIC design for both performance and security reasons.

Traditional web traffic over TCP requires a three-way handshake; QUIC uses UDP, which speeds up web traffic through fewer round trips and fewer packets sent. Beyond speed, QUIC provides several benefits, including connection migration, improved latency, congestion control, and built-in encryption. According to Google, “QUIC handshakes frequently require zero round trips before sending payload, as compared to 1–3 round trips for TCP+TLS.” The first connection requires one round trip, and subsequent connections need none. Also, because QUIC is designed for multiplexed operation, it handles packet loss better than TCP and allows faster handshakes.

Google’s version of QUIC is now known as gQUIC. HTTP/3 has evolved significantly from gQUIC with contributions and enhancements from the IETF working group. While HTTP/3 is technically the full application protocol, QUIC refers to the underlying transport protocol, which is not limited to serving web traffic. UDP is connectionless and unreliable; QUIC overcomes these limitations by layering a TCP-like stack over UDP, adding reliable delivery, retransmissions, and flow control, while solving TCP’s head-of-line blocking problem.

HTTP/3 uses UDP the way HTTP/2 uses TCP. Every connection carries several parallel streams that transfer data simultaneously over a single connection without affecting one another. So, unlike TCP, a lost packet carrying data for one stream impacts only that particular stream. Each stream frame can be dispatched to its stream immediately upon arrival, so streams without loss continue to be reassembled in the application. This connection establishment strategy is enabled by QUIC’s combined cryptographic and transport handshake.

Comparative analysis with HTTP/2

QUIC was designed to improve performance by mitigating the packet loss and latency issues of HTTP/2. HTTP/2 uses a single TCP connection to each origin, which leads to head-of-line blocking: a request’s object may stall behind another object that has experienced a loss until the lost data can be recovered. QUIC addresses this by pushing HTTP/2’s stream layer down into the transport layer, avoiding the issue at both the application and transport layers. HTTP/3 also enables multiplexing, delivering each request independently of other requests on the connection, while integrating directly with TLS. While HTTP/2 and HTTP/3 work in similar ways, below are some of the significant differences between them.

From the network stack perspective, HTTP/2 uses TLS 1.2+ in alignment with the HTTP standard, with TCP as the underlying transmission protocol. HTTP/3, by contrast, uses TLS 1.3 by default in combination with QUIC, with UDP as the transmission protocol. The diagram below illustrates where QUIC sits in the network protocol stack. In comparison, the previous version uses TLS 1.2 and relies on TCP’s congestion control and loss recovery, with HTTP/2 handling the multi-streaming capabilities.

Credit: Sandeep Jayashankar and Subin Thayyile Kandy

Figure 2: QUIC’s location in the network protocol stack.

Connection ID benefits

TCP connections are identified by their source and destination network entities (mainly addresses and ports). A QUIC connection instead uses a connection ID, a 64-bit, randomly generated client identifier. This change greatly benefits current web technologies, which must support user mobility as a primary requirement. If a user moves from a Wi-Fi network to a cellular network, HTTP/2 over TCP would need to establish a new connection based on the new address. Because HTTP/3’s QUIC protocol uses a random connection ID, a client whose IP address changes while moving between networks continues using the existing connection ID without interruption.

From a protocol perspective, the connection ID provides additional benefits. The server and user-agents can identify the original vs. the retransmission connections using the connection ID and avoid retransmission ambiguity issues prevalent in TCP.
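The migration benefit follows from how an endpoint looks up connections. A minimal sketch (an illustrative table, not a real QUIC stack): connections are keyed by connection ID rather than by the (address, port) 4-tuple, so a datagram arriving from a brand-new address still routes to the existing connection.

```python
class ConnectionTable:
    """Route incoming datagrams by connection ID instead of by the
    (address, port) 4-tuple, so a client that migrates networks
    keeps its existing connection."""

    def __init__(self):
        self._by_cid = {}

    def add(self, cid: bytes, state: dict) -> None:
        self._by_cid[cid] = state

    def route(self, cid: bytes, peer_addr: tuple):
        conn = self._by_cid.get(cid)
        if conn is not None:
            conn["peer_addr"] = peer_addr  # migration: just update the path
        return conn
```

A connection registered while the client was on Wi-Fi is still found when the same connection ID arrives from a cellular address; only the recorded path changes. A TCP-style 4-tuple key would instead miss and force a fresh handshake.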


QUIC has been gaining acceptance and browser support; major websites like YouTube and Facebook have enabled it for faster page loads, though as of this writing only 4% of the top sites support QUIC. Microsoft has announced it will ship Windows with a general-purpose QUIC library, MsQuic, in the kernel to support various inbox features.

QUIC and HTTP/3 are designed to meet today’s goals for internet and network performance, reliability, and security. Security has improved significantly with mandated support for TLS 1.3, addressing weaknesses in HTTP/2 and prior versions of HTTP. HTTP/3’s use of end-to-end encryption in transit helps defend against several privacy concerns involving state actors and data aggregators. While some weaknesses remain, HTTP/3 will continue to evolve and is a significant improvement over HTTP/2 from both the performance and security perspectives.



Subin Thayyile Kandy is an industry veteran with more than a decade of experience in application security and offensive security. He has worked for several banking and financial organizations, including Barclays and Capital One, in both defensive and offensive security roles, and enjoys every aspect of it. When not consumed by the excitement of his work, he spends time with his growing family and loves traveling and hiking.


Sandeep Jayashankar is an experienced information security architect with a demonstrated history of helping the finance industry develop applications and infrastructure securely. He has several years of experience on both the offensive and defensive sides of successful security programs. He has led many successful ethical hacking operations against enterprise organizations, uncovered loopholes in their security posture, and advised organizations on mitigating threats and implementing their security programs effectively. Sandeep holds many well-recognized and respected industry certifications, including those from Offensive Security (OSCP, OSCE, OSEE, OSWE), GIAC GMOB, and ISC2's CISSP.

Sandeep was also a Java developer who built many critical enterprise solutions using different development platforms and design patterns. He holds a Master's degree in Information Security from Johns Hopkins University.