HTTP/2 – Improved Browsing Experience Or Implementation Nightmare?

On 17th February 2015, the 17th draft of the Hypertext Transfer Protocol version 2 (HTTP/2) standard was released.

HTTP/2 offers many improvements over its predecessor; however, it also creates many new challenges. These challenges need to be considered now to prevent organisations from being caught out over the coming years.

Key Points:

The HTTP/2 specification enables more efficient use of network resources and reduces perceived latency for the end user. It achieves this by:

  • Introducing header field compression – reducing the size of HTTP headers in transit.
  • Allowing multiple concurrent HTTP exchanges within a single TCP session, using streams and multiplexing.
  • Introducing the ability for the server to pre-emptively push content to the client.
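
To make the header-compression point concrete, the sketch below uses Go's golang.org/x/net/http2/hpack package to encode a small header block into the HPACK wire format and decode it again. The header names and values are illustrative only, and the 4 KiB dynamic-table limit simply mirrors the protocol's default setting.

```go
// A minimal sketch of HTTP/2 header compression (HPACK).
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	var buf bytes.Buffer

	// Encode an illustrative request header block into the HPACK wire format.
	enc := hpack.NewEncoder(&buf)
	enc.WriteField(hpack.HeaderField{Name: ":method", Value: "GET"})
	enc.WriteField(hpack.HeaderField{Name: ":path", Value: "/index.html"})
	enc.WriteField(hpack.HeaderField{Name: "user-agent", Value: "example-client/1.0"})

	fmt.Printf("encoded header block: %d bytes\n", buf.Len())

	// Decode it again, bounding the dynamic table to the 4 KiB default.
	dec := hpack.NewDecoder(4096, func(hpack.HeaderField) {})
	fields, err := dec.DecodeFull(buf.Bytes())
	if err != nil {
		panic(err)
	}
	for _, f := range fields {
		fmt.Printf("%s: %s\n", f.Name, f.Value)
	}
}
```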

The current version also proposes compulsory end-to-end encryption between the server and the client. Although this helps mitigate a number of risks, such as Man-In-The-Middle (MITM) attacks, it is a controversial topic among businesses and ISPs. It has many benefits for the end user; however, it renders services such as proxies, load balancing, web caching and profitable meta-data scraping almost redundant.
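
To illustrate the key points above, the following sketch uses only Go's standard net/http client to fire several requests concurrently at a hypothetical server, https://example.com. Where the server supports HTTP/2 over TLS, the requests are multiplexed as streams over a single TCP connection rather than each opening its own connection; the paths requested are purely illustrative.

```go
// A minimal sketch of concurrent requests multiplexed over one connection.
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	paths := []string{"/", "/styles.css", "/app.js"} // illustrative resources

	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(path string) {
			defer wg.Done()
			resp, err := http.Get("https://example.com" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				return
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			// resp.Proto reports "HTTP/2.0" when the exchange was multiplexed.
			fmt.Printf("%s %s %d bytes\n", resp.Proto, path, len(body))
		}(p)
	}
	wg.Wait()
}
```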

Benefits:

  • Streams and multiplexing allow a bi-directional sequence of frames to be exchanged between the client and server within a single connection.

– This allows multiple streams to be open concurrently within one connection, with either endpoint interleaving frames from multiple streams.

– Connection-tracking devices such as firewalls and NAT gateways typically have to maintain a large number of stateful connections. Moving to a single TCP connection per client reduces the overhead on these devices and, for larger websites, could potentially cut costs by up to 75% by reducing the need for large-scale firewalls.

– Streams can be established and used unilaterally or shared by either the client or the server, and either end can terminate a stream, ensuring unused sessions are not left open or idle on the web servers.

– Flow control is built into the protocol itself, moving it away from networking devices. This allows individual virtual hosts within a web farm to be prioritised, rather than relying on current IP-based technologies, giving far more granular control.

– Push technologies built into the protocol allow more intelligence to be built into web servers, which can prepare and push content to the client based on typical requirements. For example, if a user requests the index page of a website, the server could pre-emptively push the next file the browser will need, such as the site's CSS file. This reduces response times and the appearance of latency on a web connection.
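
A minimal sketch of the push behaviour described above, using Go's net/http: when the handler serves the index page it also pushes the stylesheet over the same connection via the http.Pusher interface, which is only available when the underlying connection is HTTP/2. The paths and certificate files are illustrative assumptions. Pushes are advisory, and the client is free to reject them.

```go
// A minimal sketch of HTTP/2 server push.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/static/site.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		fmt.Fprintln(w, "body { font-family: sans-serif; }")
	})

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// http.Pusher is only present when the connection is HTTP/2.
		if pusher, ok := w.(http.Pusher); ok {
			if err := pusher.Push("/static/site.css", nil); err != nil {
				log.Printf("push failed: %v", err)
			}
		}
		fmt.Fprintln(w, `<html><head><link rel="stylesheet" href="/static/site.css"></head><body>index</body></html>`)
	})

	// cert.pem and key.pem are placeholders for a real certificate pair.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```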

Challenges:

  • Distributed Denial of Service (DDoS) attacks must be considered when implementing HTTP/2. Although HTTP/2 is more efficient on the network side, it requires more computational power to compress, encrypt and pre-empt user activity – resources that could be exhausted by a DDoS attack.
  • The header compression and flow control features depend on an endpoint reserving resources to store a greater amount of state data. These memory commitments must be strictly bounded to prevent resource-exhaustion attacks (see the configuration sketch after this list).
  • Currently deployed in-line proxy and caching services will no longer work as they stand, because data will be transferred encrypted and in a binary framing format. Any caching or proxying of content will require decrypting and parsing the binary frames before manipulating the data path.
  • Server hardware specifications will need to increase to provide the additional resources HTTP/2 requires.
  • Layer 4 firewalls and intrusion detection services will also be undermined: their rules expect multiple parallel HTTP connections rather than HTTP/2’s single-connection model, and the end-to-end encryption of connections will prevent firewalls and IDS/IPS systems from inspecting flow data and protecting the services behind them.
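
As an illustration of bounding those memory commitments, the sketch below configures a Go server through golang.org/x/net/http2, capping concurrent streams, frame sizes and the initial flow-control windows discussed in the Benefits section. The specific figures are illustrative assumptions, not recommendations.

```go
// A minimal sketch of explicitly bounding per-connection HTTP/2 state.
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{
		Addr:           ":8443",
		Handler:        http.DefaultServeMux,
		MaxHeaderBytes: 16 << 10, // bounds the header state accepted per request
	}

	// Bound HTTP/2 state explicitly rather than relying on defaults.
	if err := http2.ConfigureServer(srv, &http2.Server{
		MaxConcurrentStreams:         100,     // cap the streams multiplexed on one connection
		MaxReadFrameSize:             1 << 20, // largest frame the server will read
		MaxUploadBufferPerConnection: 1 << 20, // initial connection-level flow-control window
		MaxUploadBufferPerStream:     1 << 18, // initial per-stream flow-control window
	}); err != nil {
		log.Fatal(err)
	}

	// cert.pem and key.pem are placeholders for a real certificate pair.
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```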

Conclusions:

There are a number of early deployments of this type of technology through SPDY ("Speedy"), and they are showing good results; the advantages seem to outweigh the concerns. However, there will need to be a shift in responsibility from network infrastructure teams to server teams.