Most people have never heard of HTTP/2. It was published in 2015 and went almost unnoticed. Three years later, it seems like the vast majority of websites are still using HTTP/1.1.
This new version of the HTTP protocol comes with lots of new features, almost all aimed at improving perceived performance for website users. Configuring a web server for HTTP/2 is simple, as both Apache and Nginx have supported it for some time now. Also, any browser that does not support it will gracefully fall back to HTTP/1.1 without any problem or performance degradation.
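In Nginx, for example, enabling HTTP/2 is typically a one-word change on the `listen` directive. A minimal sketch (the domain and certificate paths are placeholders):

```nginx
# Illustrative server block; adjust server_name and certificate paths.
server {
    listen 443 ssl http2;        # "http2" enables HTTP/2 on this TLS listener
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```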
The only obstacle preventing people from using HTTP/2 is that, in practice, it works only over SSL encryption, i.e. HTTPS. Almost all browsers have decided that they will not enable HTTP/2 unless the connection goes through HTTPS.
Using SSL nowadays is easier than ever. HTTPS certificates usually have to be paid for annually, but there is an initiative called Let's Encrypt that issues basic HTTPS certificates completely free of charge, and they work in all browsers. They have to be renewed every three months or less, and we do have to prove that we control the domain and the web server, but Let's Encrypt provides scripts that automate this and make the task super easy.
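As an illustration, with the certbot client (one of the Let's Encrypt automation tools) obtaining and renewing a certificate boils down to two commands; the domain is a placeholder and the exact flags depend on your setup:

```shell
# Obtain a certificate and let certbot edit the Nginx config for you:
sudo certbot --nginx -d example.com -d www.example.com

# Renewal is designed to be run unattended, e.g. from cron:
sudo certbot renew --quiet
```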
Performance Improvements in HTTP/2
The main difference from the previous version is vastly increased parallelism. Until now, browsers would fetch only a handful of URLs at a time (typically six per host), and they had to wait for one of those requests to finish before starting the next one.
It works this way because in HTTP/1.1 each request needs its own TCP connection. Keep-Alive helps by reusing the same connection once a request ends, but the browser still has to wait for the current response before asking for a new resource. To avoid denial-of-service attacks, it was established that browsers should limit the number of concurrent connections.
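The effect of that per-host limit can be sketched with a small simulation (purely illustrative; the limit of six mirrors what modern browsers use for HTTP/1.1):

```python
import asyncio

# Illustrative: 30 requests forced through a browser-style limit of
# 6 concurrent connections per host.
async def fetch(sem, stats):
    async with sem:
        stats["active"] += 1
        stats["peak"] = max(stats["peak"], stats["active"])
        await asyncio.sleep(0.01)  # stand-in for network time
        stats["active"] -= 1

async def main():
    sem = asyncio.Semaphore(6)  # per-host connection limit
    stats = {"active": 0, "peak": 0}
    await asyncio.gather(*(fetch(sem, stats) for _ in range(30)))
    return stats["peak"]

peak = asyncio.run(main())
print(peak)  # never exceeds 6, no matter how many requests are queued
```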
Every TCP connection competes to get as much bandwidth as possible. Network cards, switches and routers distribute bandwidth across the different connections. If an application opens 2000 connections, in total it gets 2000x the priority of a program that makes a single request at a time. This is considered bad practice and tends to saturate the network. Some of you probably remember eMule or BitTorrent saturating the whole network whenever someone ran them. That is because P2P applications use thousands of concurrent connections, generating a huge network load and consuming resources in an unfair way.
To work around this limit on concurrent connections, those of us who build websites were more or less forced to bundle things as much as possible, so that we would use as few connections as we could. This works against HTTP's design, because caching policies now apply to the whole bundle: whenever you change a single line of CSS or one image in it, the entire bundle gets transmitted again to all users.
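The caching problem can be made concrete: a bundle's cache key is derived from its full contents, so changing one byte anywhere invalidates the cached copy for every user. A minimal sketch:

```python
import hashlib

# Illustrative: a content-hash cache key over a whole bundle.
def cache_key(files):
    h = hashlib.sha256()
    for content in files:
        h.update(content)
    return h.hexdigest()[:8]

v1 = cache_key([b"body{margin:0}", b"a{color:blue}"])
v2 = cache_key([b"body{margin:0}", b"a{color:red}"])  # one rule changed
print(v1 != v2)  # True: the entire bundle must be re-downloaded
```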
HTTP/2 uses a single TCP connection regardless of how many concurrent requests it handles, which allows a very large number of parallel downloads. Browsers have benefited from this and drop the concurrency limit once they know they are on HTTP/2, so in practice we see around one hundred requests being handled concurrently.
So bundling has become less important, although it still plays a role. We can go back to the old days when every small file was served independently, without losing performance, even if bundling still gives a small benefit on HTTP/2.
Other benefits of HTTP/2
Aside from request concurrency, HTTP/2 is no longer a text protocol but a binary one. This reduces the CPU spent parsing HTTP data compared to text mode. It also avoids retransmitting redundant information across multiple requests, so less bandwidth is used for HTTP messages with no content. The TCP connection stays open for long periods, so subsequent user clicks on the website reuse the same connection, leading to a quicker response.
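To give a feel for the binary framing: every HTTP/2 frame starts with a fixed 9-byte header (24-bit length, 8-bit type, 8-bit flags, a reserved bit plus a 31-bit stream identifier), which a machine can decode without any text scanning. A sketch based on the framing layout in RFC 7540:

```python
# Sketch: parse the fixed 9-byte header that precedes every HTTP/2 frame.
def parse_frame_header(data: bytes):
    length = int.from_bytes(data[0:3], "big")
    frame_type = data[3]
    flags = data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return length, frame_type, flags, stream_id

# A HEADERS frame (type 0x1) of 16 bytes on stream 1, END_HEADERS flag (0x4) set:
header = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
print(parse_frame_header(header))  # (16, 1, 4, 1)
```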
HTTP/2 also supports content prioritization. The browser can request resources with different priorities. Say, for example, that the browser knows a CSS resource is blocking the website from being rendered, while it also has images to load that are not blocking but would be nice to have. The browser can issue the image requests with low priority and the CSS with high priority. The web server should honor this priority and send the CSS first if possible. But if that CSS resource has to wait for a backend (PHP, Python, ...), the web server may decide to send the images in the meantime.
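Server-side, honoring those hints can be pictured as draining a priority queue of pending responses. This is only a toy model: real HTTP/2 prioritization is richer (stream dependencies and weights), but the idea is the same:

```python
import heapq

# Toy model: lower number = higher priority.
pending = []
heapq.heappush(pending, (1, "style.css"))   # render-blocking, high priority
heapq.heappush(pending, (5, "photo1.jpg"))  # nice to have
heapq.heappush(pending, (5, "photo2.jpg"))

send_order = [heapq.heappop(pending)[1] for _ in range(len(pending))]
print(send_order)  # ['style.css', 'photo1.jpg', 'photo2.jpg']
```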
They also added a header compression algorithm called HPACK. HTTP headers are a bit bulky compared with small payloads, and with this feature, sending HTTP responses without content (like 304 Not Modified) becomes way more efficient.
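To see why HPACK helps so much in that case: the most common headers live in a fixed static table (RFC 7541, Appendix A), and a header that matches a table entry verbatim is sent as a single byte. A sketch of that cheapest case, using a small subset of the static table:

```python
# Sketch of HPACK's indexed representation: one byte 0b1xxxxxxx whose
# low 7 bits are the static-table index (RFC 7541, Appendix A subset).
STATIC_TABLE = {
    2: (":method", "GET"),
    7: (":scheme", "https"),
    8: (":status", "200"),
    11: (":status", "304"),
}

def encode_indexed(index: int) -> bytes:
    assert 1 <= index < 128  # larger indexes need multi-byte integers
    return bytes([0x80 | index])

def decode_indexed(data: bytes):
    return STATIC_TABLE[data[0] & 0x7F]

wire = encode_indexed(11)
print(wire.hex(), decode_indexed(wire))  # 8b (':status', '304')
```

So the status line of a 304 response, which costs a dozen bytes or more in HTTP/1.1 text form, fits in one byte on the wire.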
I have been trying out HTTP/2 for a while now with great results. The only complex part was the SSL certificate, which is a bit cumbersome to configure the first time.
So, what do you think? Are you using it already?