In the mid-90s there was a famous incident where an email administrator at a US university fielded a phone call from a professor complaining that his department could only send emails 500 miles. The professor explained that whenever they tried to email anyone farther away, their emails failed – it sounded like nonsense, but it turned out to actually be happening. To understand why, you need to realise that the speed of light has more impact on how the internet works than you might think. In the email case, the timeout for connections was set to about 6 milliseconds – if you do the maths, that is roughly the time it takes light to travel 500 miles and back.
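(If you want to check that maths yourself, here is a quick sketch – the 6 ms timeout figure is approximate:)

```python
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282  # speed of light in a vacuum

timeout_sec = 0.006  # the ~6 ms connection timeout from the story

# The connection has to get there *and* back before the timeout,
# so halve the distance light covers in that time.
print(SPEED_OF_LIGHT_MILES_PER_SEC * timeout_sec / 2)  # ~559 miles
```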
We’ll be talking about trucks a lot in this blog post!
The time it takes for data to make its way across a network connection is called latency, and it turns out that latency has a lot to answer for. Latency is one of the main issues affecting the speed of the web, and was one of the primary drivers behind Google inventing HTTP/2 (it was originally called SPDY while Google was working on it, before it became a web standard).
HTTP/2 is now an established standard and is seeing a lot of use across the web, but it is still not as widespread as it could be. It is an easy opportunity to improve the speed of your website, but it can be fairly intimidating to try to understand it.
In this post I hope to provide an accessible top-level introduction to HTTP/2, specifically targeted towards SEOs. I do gloss over some of the technical details and don’t cover all the features of HTTP/2; my aim here isn’t to give you an exhaustive understanding, but to help you understand the important parts in the most accessible way possible.
HTTP/1.1 – The Current Norm
Currently, when you request a web page or other resource (such as images, scripts, CSS files etc.), your browser speaks HTTP to a server in order to communicate. The current version is HTTP/1.1, which has been the standard for the last 20 years, essentially unchanged.
Anatomy of a Request
We are not going to drown in the deep technical details of HTTP in this post, but we are going to quickly touch on what a request looks like. There are a few bits to a request:
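A minimal request might look something like this (a sketch – the Host value and User-Agent here are illustrative, and real requests carry more headers):

```
GET /anchorman/ HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64) Firefox/68.0
```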
The top line here is saying what sort of request this is (GET is the normal sort of request, POST is the other main one people know of), what URL the request is for (in this case /anchorman/), and finally which version of HTTP we are using.
The second line is the mandatory ‘Host’ header, which is part of all HTTP/1.1 requests; it covers the situation where a single webserver may be hosting multiple websites, and it needs to know which one you are looking for.
Finally, there will be a variety of other headers, which we are not going to get into. In this case I’ve shown the User-Agent header, which indicates what sort of device and software (browser) you are using to connect to the website.
HTTP = Trucks!
In order to help explain and understand HTTP and some of the issues, I’m going to draw an analogy between HTTP and … trucks! We are going to imagine that an HTTP request being sent from your browser is a truck that has to drive from your browser over to the server:
A truck represents an HTTP request/response to a server
In this analogy, we can imagine that the road itself is the network connection (TCP/IP, if you want) from your computer to the server:
The road is a network connection – the transport layer for our HTTP Trucks
Then a request is represented by a truck, that is carrying a request in it:
HTTP Trucks carry a request from the browser to the server
The response is the truck coming back with a response, which in this case is our HTML:
HTTP Trucks carry a response back from the server to the browser
“So what is the problem?! This all sounds great, Tom!” – I can hear you all saying. The problem is that in this model anyone can stare down into the truck trailers and see what they are hauling. Should an HTTP request contain credit card details, personal emails, or anything else sensitive, anybody can see your information.
HTTP Trucks aren’t secure – people can peek at them and see what they are carrying
HTTPS
HTTPS was designed to combat the issue of people being able to peek into our trucks and see what they are carrying.
Importantly, HTTPS is essentially identical to HTTP – the trucks and the requests/responses they transport are the same as they were. The response codes and headers are all the same.
The difference all happens at the transport (network) layer; we can imagine it as a tunnel over our road:
In HTTPS, requests & responses are the same as HTTP. The road is secured.
In the rest of the article, I’ll imagine we have a tunnel over our road, but won’t show it – it would be boring if we couldn’t see our trucks!
Impact of Latency
So the main problem with this model is related to the top speed of our trucks. In the 500-mile email introductory story we saw that the speed of light can have a very real impact on the workings of the internet.
HTTP Trucks cannot go faster than the speed of light.
HTTP requests and many HTTP responses tend to be quite small. However, our trucks can only travel at the speed of light, and so even these small requests can take time to go back and forth from the user to the website. It is tempting to think this won’t have a noticeable impact on website performance, but it is actually a real problem…
HTTP Trucks travel at a constant speed, so longer roads mean slower responses.
The farther the distance of the network connection between a user’s browser and the web server (the length of our ‘road’), the farther the request and response have to travel, which means they take longer.
Now consider that a typical website is not a single request and response, but is instead a sequence of many requests and responses. Often a response will mean more requests are required – for example, an HTML file probably references images, CSS files and JavaScript files:
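For example, the HTML might contain references like these (illustrative file names):

```html
<link rel="stylesheet" href="/css/styles.css">
<script src="/js/app.js"></script>
<img src="/images/logo.png" alt="Our logo">
```

Each of those lines means another truck has to be dispatched to fetch another file.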
Some of these files then may have further dependencies, and so on. Typically a web page may require 50-100 separate requests:
Web pages nowadays often require 50-100 separate HTTP requests.
Let’s look at how that may look for our trucks…
Send a request for a web page:
We send a request to the web server for a page.
Request travels to server:
The truck (request) may take 50ms to drive to the server.
Response travels back to browser:
And then 50ms to drive back with the response (ignoring time to compile the response!).
The browser parses the HTML response and realises there are a number of other files that are needed from the server:
After parsing the HTML, the browser identifies more assets to fetch. More requests to send!
Limit of HTTP/1.1
The problem we now encounter is that there are several more files we need to fetch, but with HTTP/1.1 each road can only handle a single truck at a time – each TCP (networking) connection can only carry one request/response at a time, so every simultaneous request needs its own connection.
Each truck (request) needs its own road (network connection).
Furthermore, building a new road (opening a new network connection) also requires a round trip. In our world of trucks we can liken this to needing a steamroller to first lay the road and then add our road markings. This is another whole round trip, which adds more latency:
New roads (network connections) require work to open them.
This means another whole round trip to open new connections.
Typically browsers open around 6 simultaneous connections at once:
Browsers usually open 6 roads (network connections).
However, if we are looking at 50-100 files needed for a webpage, we still end up in the situation where trucks (requests) have to wait their turn. This is called ‘head of line blocking’:
Often trucks (requests) have to wait for a free road (network connection).
If we look at the waterfall diagram for a simple page that has a CSS file and a lot of images (in this example, this HTTP/2 site), you can see this in action:
Waterfall diagrams highlight the impact of round trips and latency.
In the diagram above, the orange and purple segments can be thought of as our steamrollers, where new connections are made. You can see that initially there is just one connection open (line 1), with another connection being opened. Line 2 then re-uses the first connection, and line 3 is the first request over the second connection. When those complete, lines 4 & 5 are the next two images.
At this point the browser realises it will need more connections, so four more are opened. We can then see requests going in batches of 6 at a time, corresponding to the 6 roads (network connections) that are open.
Latency vs Bandwidth
In the waterfall diagram above, each of these images may be small, but each requires a truck to come and fetch it. This means lots of round trips, and given we can only run 6 trucks at a time, there is a lot of time spent with requests waiting.
It is sometimes difficult to understand the difference between bandwidth and latency. Bandwidth can be thought of as the load capacity of our trucks: with more bandwidth, each truck can carry more at once. This often doesn’t help with page load times, though, because each request and response cannot share a truck with another. This is why increasing bandwidth has been shown to have a limited impact on the load time of pages, as demonstrated in research conducted by Mike Belshe at Google and discussed in this article from Googler Ilya Grigorik.
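To get a feel for why latency dominates, here is a rough back-of-envelope sketch (all the numbers are illustrative):

```python
# A crude model of HTTP/1.1 loading that ignores bandwidth entirely.
requests_needed = 60   # files the page needs
connections = 6        # typical browser connection limit
round_trip_ms = 100    # one request/response round trip

# Requests travel in batches of 6 trucks, one batch per round trip.
batches = -(-requests_needed // connections)  # ceiling division = 10
print(batches * round_trip_ms)  # -> 1000 ms spent just on round trips
```

Bigger trucks (more bandwidth) don’t change that figure at all; only shorter roads, or more trucks per road, do.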
It was clear that in order to improve the performance of the web, the issue of latency would need to be addressed. This research is what led to Google developing the SPDY protocol, which later turned into HTTP/2.
Reducing the impact of latency
Various strategies have been employed to reduce the impact that latency has on website load times. One of these is ‘sprite maps’, which take lots of small images and jam them together into a single file:
Sprite maps are a trick used to reduce the number of trucks (requests) needed.
The advantage of sprite maps is that, being a single file, they can travel in one truck (request/response). Clever use of CSS can then display just the portion that corresponds to the image you actually want. One file means only a single request and response are required, which reduces the number of round trips.
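As a sketch, the CSS side of the trick looks something like this (the file name and offsets are made up):

```css
.icon-search {
  background-image: url("/images/sprite.png"); /* the one big combined image */
  background-position: -32px 0;                /* shift so only our icon shows */
  width: 16px;
  height: 16px;
}
```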
Another thing that helps to reduce latency is using a CDN platform, such as CloudFlare or Fastly, to host your static assets (images, CSS files etc. – things that are not dynamic and the same for every visitor) on servers all around the world. This means that the round trips for users can be along a much shorter road (network connection) because there will be a nearby server that can provide them with what they need.
CDNs have servers all around the world, which can make the required roads (network connections) shorter.
CDNs also provide a variety of other benefits, but latency reduction is a headline feature.
HTTP/2 – The New World
So hopefully you have now realised that HTTP/2 can help reduce the impact of latency and dramatically improve the performance of pages. How does it go about it?
Introducing Multiplexing – More trucks to the rescue!
With HTTP/2 we get multiplexing, which essentially means we are allowed to have more than one truck on each road:
With HTTP/2 a road (network connection) can handle many trucks (requests/responses).
We can immediately see the change in behaviour on a waterfall diagram – compare this with the one above (note the change in the scale too – this is a lot faster):
We now only need one road (connection) then all our trucks (requests) can share it!
The exact speed benefits you may get depend on a lot of other factors, but by removing the problem of head of line blocking (trucks having to wait) we can immediately get a lot of benefits, for almost no cost to us.
Same old trucks
With HTTP/2 our trucks and their contents stay essentially the same as they always were; we can just imagine we have a new traffic management system.
Requests look as they did before:
The same response codes exist and mean the same things:
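For example, here is roughly what a tool like curl shows for the start of an HTTP/2 response – a familiar status code, just with lowercase header names:

```
HTTP/2 200
content-type: text/html; charset=utf-8
```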
Because the content of the trucks doesn’t change, this is great news for implementing HTTP/2 – your web platform or CMS does not need to be changed and your developers don’t need to write any code! We’ll discuss this below.
Server Push
A much anticipated feature of HTTP/2 is ‘Server Push’, which allows a server to respond to a single request with multiple responses. Imagine a browser requests an HTML file, but the server knows that the browser will also need a specific CSS file and a specific JS file. The server can then just send those straight back, without waiting for them to be requested:
Server Push: A single truck (request) is sent…
Server Push: … but multiple trucks (responses) are sent back.
The benefit is obvious – it removes another whole round trip for each resource the server can ‘anticipate’ the client will need.
The downside is that, at the moment, this is often implemented badly, and it can mean the server sends trucks that the client doesn’t need (because the client has already cached the response from earlier), which means you can make things worse.
For now, unless you are very sure you know what you are doing, you should avoid server push.
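(If you do experiment with it, one common way of triggering a push is a Link header on the HTML response, which some servers and CDNs will act on – support varies, so treat this as a sketch:)

```
Link: </css/styles.css>; rel=preload; as=style
Link: </js/app.js>; rel=preload; as=script
```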
Implementing HTTP/2
Ok – this sounds great, right? Now you should be wondering how you can turn it on!
The most important thing is to understand that because the requests and responses are the same as they always were, you do not need to update the code on your site at all. You need to update your server to speak HTTP/2 – and then it will do the new ‘traffic management’ for you.
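As an example, on nginx (assuming a build with HTTP/2 support, version 1.9.5 or later, and TLS already configured) it can be as little as one extra word in your config – the paths here are placeholders:

```nginx
server {
    listen 443 ssl http2;   # 'http2' switches on the new traffic management
    server_name www.example.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
}
```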
If that seems hard (or if you already have one), you can instead use a CDN to help you deploy HTTP/2 to your users. Something like CloudFlare, or Fastly (my favourite CDN – it requires more advanced knowledge to set up but is super flexible), would sit in front of your webserver, speaking HTTP/2 to your users:
A CDN can speak HTTP/2 for you whilst your server speaks HTTP/1.1.
Because the CDN will cache your static assets, like images, CSS files, Javascript files and fonts, you still get the benefits of HTTP/2 even though your server is still in a single-truck world.
HTTP/2 is not another migration!
It is important to realise that to get HTTP/2 you will need to already have HTTPS, as all the major browsers will only speak HTTP/2 when using a secure connection:
HTTP/2 requires HTTPS
However, setting up HTTP/2 does not require a migration in the same way HTTPS did. With HTTPS your URLs changed from http://example.com to https://example.com, and you required 301 redirects, a new Google Search Console account and a week-long meditation retreat to recover from the stress.
With HTTP/2 your URLs will not change, and you will not require redirects or anything like that. Browsers and devices that can speak HTTP/2 will do so (it is actually the guy in the steamroller who negotiates that part – but that is a-whole-nother story!), and other devices will fall back to speaking HTTP/1.1, which is just fine.
We also know that Googlebot does not speak HTTP/2 and will still use HTTP/1.1:
https://moz.com/blog/challenging-googlebot-experiment
However, don’t despair – Google will still notice that you have made things better for users, as we know they are now using usage data from Chrome users to measure site speed in a distributed way:
https://moz.com/blog/google-chrome-usage-data-measure-site-speed
This means that Google will notice the benefit you have provided to users with HTTP/2, and that information will make it back into Google’s evaluation of your site.
Detecting HTTP/2
If you are interested in whether a specific site is using HTTP/2 there are a few ways you can go about it.
My preferred approach is to turn on the ‘Protocol’ column in the Chrome developer tools. Open up the dev tools, go to the ‘Network’ tab, and if you don’t see the column then right-click on the column headers to add it from the dropdown:
Alternatively, you can install this little Chrome extension, which will indicate if a site is using HTTP/2 (but won’t give you the per-connection breakdown you’d get from doing the above):
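If you prefer the command line, a reasonably recent curl built with HTTP/2 support can also tell you what was negotiated:

```
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://www.example.com
# prints "2" if HTTP/2 was used, "1.1" otherwise
```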
Slide Deck
If you would prefer to consume this as a slide deck, you can find it on Slideshare. Feel free to re-use the deck in part or in its entirety, provided you give attribution (@TomAnthonySEO):
An introduction to HTTP/2 & Service Workers for SEOs from Tom Anthony
Wrap Up
Hopefully you found this useful. I’ve found the truck analogy makes something that can seem hard to understand a little more accessible. I haven’t covered many of the intricate details of HTTP/2 or some of its other functionality, but this should help you understand things a bit better.
I have, in discussions, extended the analogy in various ways, and would love to hear if you do too! Please jump into the comments below for that, or to ask a question, or just hit me up on Twitter.