Browsers do all sorts of clever things to render a website fast. But they are not omniscient, and because they lack data, they end up wasting precious seconds and bandwidth.
Most websites could load a lot faster: close to 40% faster on average, if only they made better use of their available network capacity. That is one of the conclusions we draw from the data presented in this article. But first, so you know: we write this blog to learn about the network performance of web applications, and to find ways of optimizing them.
This first post simply presents statistics and figures on some data we had the opportunity to peruse. The data are timing numbers, as seen by the Google Chrome dev tools, collected while crawling close to a thousand sites last December. At the bottom of the article there is a section with a more detailed description of the data, together with a link where you can fetch the data itself and the code behind the numbers presented here.
If you would like to see different statistics or just have questions, don’t forget to comment below or reach us on Twitter. You can also subscribe to our newsletter to stay in the loop on practical web performance.
- This is the distribution of the number of requests that are made to the server when fetching a page with an empty cache.
- The mean number is 86 requests.
- The 25th percentile falls at 26 requests.
- The 75th percentile falls at 109 requests.
Normalized density is the height of a bar scaled so that when you multiply it by the bar’s width, you get the frequency that the bar represents. For example, the first bar in the plot represents 0.0096 * 25 = 0.24, that is, 24% of the sites (which is not far from our 25th percentile at 26 requests).
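If you want to check that reading of the plot, the normalization can be sketched in a few lines of pure Python. The request counts below are made up for illustration; only the scaling rule mirrors the plot.

```python
# A sketch of the "normalized density" scaling described above.
def normalized_density(data, bins):
    lo, hi = min(data), max(data)
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in data:
        index = min(int((x - lo) / width), bins - 1)  # clamp the maximum into the last bar
        counts[index] += 1
    # Height of each bar, scaled so that height * width = frequency.
    heights = [c / (len(data) * width) for c in counts]
    return heights, width

# Hypothetical requests-per-page samples.
data = [12, 26, 30, 45, 60, 86, 90, 109, 150, 240]
heights, width = normalized_density(data, 4)
# Multiplying back by the width recovers the frequencies,
# which sum to 1 by construction.
frequencies = [h * width for h in heights]
```

With these made-up samples, half of the sites land in the first bar, just as reading `height * width` off the plot would tell you.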
- This is the sum of the transfer sizes for all the resources needed to assemble a page.
- For sites that enable some form of compression, the transfer size is the size of the compressed data.
- The mean size is 1.2 MB (MB = megabytes).
- The lightest 25% use 178 KB or less.
- The heaviest 25% use more than 1.5 MB.
- We say that an asset is uncompressed if, as far as we can tell, it represents a missed opportunity for compression:
    - The uncompressed size of the asset is bigger than 2000 bytes and,
    - the `Content-Encoding` header field is not one of `gzip`, `deflate`, or `compress`.
- On average, fewer than three resources per page were uncompressed.
- The worst 25% had two or more uncompressed resources.
- Please note that our data is biased towards performance-conscious operators; see the [bottom](#data).
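The definition above is easy to apply to the entries of a HAR file. The following helper is a sketch of ours, not the crawler’s actual code, and the sample entry is made up; it relies only on the standard HAR layout, where each entry’s response carries a `content.size` (uncompressed body size) and a list of headers.

```python
# Encodings we accept as evidence of compression, per the definition above.
COMPRESSED_ENCODINGS = {"gzip", "deflate", "compress"}

def is_uncompressed(entry):
    """Return True when a HAR entry looks like a missed compression
    opportunity: body over 2000 bytes and no recognized Content-Encoding."""
    response = entry["response"]
    if response["content"].get("size", 0) <= 2000:
        return False
    encodings = {
        header["value"].lower()
        for header in response["headers"]
        if header["name"].lower() == "content-encoding"
    }
    return not (encodings & COMPRESSED_ENCODINGS)

# A made-up entry: 5 KB of CSS served without Content-Encoding.
entry = {"response": {"content": {"size": 5000},
                      "headers": [{"name": "Content-Type", "value": "text/css"}]}}
```

To score a whole page, you would load the HAR file with `json.load` and count how many of `log["entries"]` the predicate flags.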
- This is the time that the browser needs to get the TCP socket ready.
- The mean time is 182 milliseconds.
- The best 25% falls below 32 milliseconds.
- The worst 25%, that is, everything above the 75th percentile, takes more than 202 milliseconds.
Notice that this time is determined not so much by the browser as by how far away the server is. Also notice that there are several peaks. What do you think causes them?
- Assume that the user pressed “ENTER” in the browser’s interface. Also assume that he or she typed the address correctly, so no redirects happen.
- The sub-sections below show how much time will pass before the browser can send the first byte of the first HTTP request.
- The browser needs to do a DNS lookup and a TCP connect.
- The mean time is 308 milliseconds.
- 25% of the sites manage to do it in less than 90 milliseconds.
- The 75th percentile falls at 405 milliseconds. In other words, the worst 25% take 405 milliseconds or more.
- The browser needs to do a DNS lookup, a TCP connect, and a TLS handshake.
- The mean time to do all of this is 513 milliseconds.
- The best 25% manages in less than 50 milliseconds.
- The 75th percentile is at 969 milliseconds.
- The minimum in our dataset is 36 milliseconds.
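These pre-request times can be read straight out of a HAR entry’s `timings` object. The sketch below is ours (the numbers are hypothetical); it relies on two facts from the HAR 1.2 format: a phase that did not happen is reported as -1, and the TLS handshake (`ssl`), when present, is already included in `connect`, so summing `dns` and `connect` covers both the HTTP and HTTPS cases.

```python
def time_before_first_byte_sent(entry):
    """Milliseconds spent on DNS lookup plus TCP connect before the
    browser can send the first byte of the request. In HAR 1.2 the
    `ssl` time is included in `connect`, and -1 marks a skipped phase."""
    timings = entry["timings"]
    return sum(max(timings.get(phase, -1), 0) for phase in ("dns", "connect"))

# Hypothetical HTTPS fetch on a fresh connection.
fresh = {"timings": {"blocked": 5, "dns": 40, "connect": 265, "ssl": 180,
                     "send": 1, "wait": 119, "receive": 20}}

# Hypothetical fetch over a reused connection: no DNS, no connect.
reused = {"timings": {"blocked": 2, "dns": -1, "connect": -1, "ssl": -1,
                      "send": 1, "wait": 80, "receive": 10}}
```

For the fresh connection the function returns 305 ms; for the reused one it returns 0, which is why only the first request to each host pays this cost.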
- This is the green portion of the “timing” bar in Google Chrome’s dev tools.
- We calculate the mean waiting time across all resources for each site, considering only requests before the onLoad event fires. Therefore, we get one mean per site in our dataset, and we do statistics over those means.
- The mean mean (repetition intended) waiting time is 119 milliseconds.
- The 25th percentile is at 26 milliseconds.
- The 75th percentile is at 156 milliseconds.
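The two-level averaging described above (one mean per site, then statistics over those means) can be sketched with the standard library. The site names and waiting times below are invented for illustration.

```python
from statistics import mean

# Hypothetical per-resource waiting times (ms) for three sites.
sites = {
    "site-a": [20, 35, 110],
    "site-b": [90, 150, 300, 60],
    "site-c": [15, 25],
}

# First level: one mean waiting time per site.
per_site_means = [mean(times) for times in sites.values()]

# Second level: statistics over those means -- the "mean mean".
overall = mean(per_site_means)
```

With these made-up numbers, the per-site means are 55, 150, and 20 ms, and the mean mean is 75 ms; the 25th and 75th percentiles reported above are computed over the same list of per-site means.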
Now, this is the time spent waiting for a single request. The browser makes many requests concurrently, so even while it is waiting for one resource, it may be fetching others, and the network capacity is being used anyway. Let’s find out whether that’s the case next.
- This is the proportion of the time, from the start of the fetch until the load event, during which the browser is not receiving data for any request, even though it has sent some requests and is waiting for their responses. That is, there are requests in flight, but no data is being received for them.
- Ideally, we want to lower that proportion as much as possible.
- The mean of that proportion is 0.39. That is, on average, a browser spends 39% of the time until the load event without receiving data.
- The 25th percentile was at 0.24. That is, the best quarter of sites left the network unused 24% of the time or less.
- The 75th percentile was at 0.52. That is, the worst 25% of the sites in our sample were not receiving data half of the time or more.
- Notice that, due to resource discovery, there may be segments of time before the load event where there are neither requests in flight nor data transfers. We didn’t account for that time in this calculation.
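The core of this calculation is an interval union: merge the spans during which data was being received for any request, and whatever remains before the load event is waiting. The sketch below is a simplification of ours (names and intervals are illustrative), and unlike our actual calculation it also counts the discovery gaps mentioned in the caveat above as idle time.

```python
def idle_proportion(receive_intervals, total):
    """Fraction of [0, total] not covered by any (start, end) interval
    during which the browser was receiving data. Overlapping intervals
    are merged so that concurrent downloads are not double-counted."""
    covered = 0.0
    current_end = 0.0
    for start, end in sorted(receive_intervals):
        start = max(start, current_end)  # skip the part already counted
        if end > start:
            covered += end - start
            current_end = end
    return 1.0 - covered / total

# Made-up receive intervals (ms) for a page whose load event
# fires at t = 1000 ms. The first two intervals overlap.
intervals = [(0, 100), (50, 200), (400, 500)]
```

Here the merged intervals cover 300 ms out of 1000, so `idle_proportion(intervals, 1000)` is 0.7, squarely in the “worst 25%” bucket above.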
OK, we established that in the average case the browser spends close to 40% of the time until the load event exclusively waiting. Is there a way to improve things? Absolutely, since there were cases where the fraction of unused time was much lower. The question is which practical measures to take to generally improve the fraction of time during which the browser is actually receiving data from the network. As with all optimization tasks involving things that computers do, the general recipe is: measure, tweak, measure.
We don’t have a one-click tool to measure this proportion; if you are interested in one, let us know. In the meantime, you can use our code to measure the fraction of time for your particular website. It’s on GitHub!
Once you have established that this is a problem, there are ways to deal with it. We are biased towards one particular approach: use HTTP/2 PUSH as much as possible. You can, for example, serve the main assets of your website in one go. We have deployed it on our own site, and we have measured that, even with minimal optimization effort, the percentage of unused time drops to around 10%. Of course, traditional optimization techniques may also help. Share your experience!
The data was obtained by loading a few thousand sites in Google Chrome and collecting the .har files produced by Chrome Dev Tools. Google Chrome ran on an Amazon AWS instance of type t2.medium, and each fetch started with an empty cache. The sites were submitted by their operators to a performance assessment service, which introduces a selection bias towards site operators who take performance seriously.