Performance measurements… and the people who love them
⚠️ WARNING ⚠️ This blog post contains graphic depictions of probability. Reader discretion is advised.
Measuring performance is tricky. You have to think about accuracy and precision. Are your sampling rates high enough? Could they be too high?? How much metadata does each recording need??? Even after all that, all you have is raw data. For all this raw performance information to be useful, it eventually has to be aggregated and communicated. Whether it's a dashboard, a customer report, or an alert that pages someone, performance measurements are only useful if someone can see and understand them.
This post is a collection of things I've learned working on customer performance escalations within Cloudflare and analyzing the existing tools (both internal and commercial) that we use to evaluate our own performance. A lot of this information also comes from Gil Tene's talk, How NOT to Measure Latency. You should definitely watch that too (but maybe after reading this, so you don't spoil the ending). I was surprised by my own blind spots and by which assumptions turned out to be wrong, even though they seemed "obviously true" at the start. I expect I am not alone in that. For that reason, this journey starts by establishing fundamental definitions and ends with some new tools and techniques we will be sharing, as well as the surprising results those tools uncovered.
<h2>Check your verbiage</h2>
<p>So ... what is performance? Alright, let's start with something easy: definitions. "Performance" is not a very precise term because it gets used in too many contexts. Most of us as nerds and engineers have a gut understanding of what it means, without a real definition. We can't <i>really</i> measure it because how "good" something is depends on what makes that thing good. "Latency" is better ... but not as much as you might think. Latency does at least have an implicit time unit, so we <i>can</i> measure it. But ... what is latency? There are lots of good, specific examples of measurements of latency, but we are going to use a general definition. Someone starts something, and then it finishes — the elapsed time between is the latency.</p>
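<p>As a toy illustration of that caller's-eye view (nothing Cloudflare-specific, and the URL is just a placeholder), measuring latency from the client looks roughly like this:</p>
<pre><code>// Toy sketch: the caller starts something, it finishes, and the elapsed
// time in between is the latency. The URL is a placeholder.
async function measureLatencyMs(url: string) {
  const start = performance.now();   // someone starts something...
  await fetch(url);                  // ...the thing happens...
  return performance.now() - start;  // ...and the elapsed time is the latency
}

// Note: fetch() resolves once response headers arrive, so this is closer to a
// time to first byte than to a full download time. More on that below.
measureLatencyMs("https://example.com/").then((ms) =>
  console.log(`observed latency: ${ms.toFixed(1)} ms`)
);
</code></pre>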
<figure>
<img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1r4blwH5oeloUdXoizuLB4/f58014b1b4b3715f54400e6b03c60ea7/image7.png" />
</figure><p>This seems a bit reductive, but it’s a surprisingly useful definition because it gives us a key insight. This fundamental definition of latency is based on the client's perspective. Indeed, when we look at our internal measurements of latency for health checks and monitoring, they all have this one-sided caller/callee relationship. There is the latency of the caching layer from the point of view of the ingress proxy. There’s the latency of the origin from the cache’s point of view. Each component can measure the latency of its upstream counterparts, but not the other way around. </p><p>This one-sided nature of latency observation is a real problem for us because Cloudflare <i>only</i> exists on the server side. This makes all of our internal measurements of latency purely estimates. Even if we did have full visibility into a client’s request timing, the start-to-finish latency of a request to Cloudflare isn’t a great measure of Cloudflare’s latency. The process of making an HTTP request has lots of steps, only a subset of which are affected by us. Time spent on things like DNS lookup, local computation for TLS, or resource contention <i>does</i> affect the client’s experience of latency, but it only adds noise when we are considering our own performance.</p><p>There is a very common and useful metric for measuring web requests, and I’m sure lots of you have been screaming it in your brains from the second you read the title of this post. ✨Time to first byte✨. Clearly this is the answer, right?! But ... what is “Time to first byte”?</p>
<h2>TTFB mine</h2>
<p>Time to first byte (TTFB) is, on its face, simple. The name implies that it's the time it takes (on the client's side) to receive the first byte of the response from the server, but unfortunately, that only describes when the timer should end. It doesn't say when the timer should start. This ambiguity is just one factor that leads to inconsistencies when trying to compare TTFB across different measurement platforms ... or even across a single platform, because there is no <i>one</i> definition of TTFB. Similar to “performance”, it is used in too many places to have a single definition. That being said, TTFB is a very useful concept, so in order to measure it and report it in an unambiguous way, we need to pick a definition that’s already in use.</p><p>We have mentioned TTFB in other blog posts, but <a href="https://blog.cloudflare.com/ttfb-is-not-what-it-used-to-be/"><u>this one</u></a> sums up the problem best with “Time to first byte isn’t what it used to be.” You should read that article too, but the gist is that one popular TTFB definition used by browsers was changed in a confusing way with the introduction of <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/103"><u>early hints</u></a> in June 2022. That post and <a href="https://blog.cloudflare.com/tag/ttfb/"><u>others</u></a> make the point that while TTFB is useful, it isn’t the best direct measurement for web performance. Later on in this post we will derive why that’s the case.</p><p>One common place <i>we</i> see TTFB used is in our customers’ analyses comparing Cloudflare's performance to our competitors’ through <a href="https://www.catchpoint.com/"><u>Catchpoint</u></a>. Customers, as you might imagine, have a vested interest in measuring our latency, as it affects theirs. Catchpoint provides several tools built on their global Internet probe network for measuring HTTP request latency (among other things) and visualizing it in their web interface. In an effort to align better with our customers, we decided to adopt Catchpoint’s terminology for talking about latency, both internally and externally.</p>
<h2>Catchpoint catch-up</h2>
<p>Catchpoint makes things like TTFB easy to plot over time, but the visualization tool doesn't give a definition of what TTFB is. After going through all of their technical blog posts and combing through thousands of lines of raw data, however, we were able to put together functional definitions for TTFB and other composite metrics. This was an important step because these metrics are how our customers are viewing our performance, so we all need to understand exactly what they signify! The final report for this is internal (and long and dry), so in this post, I'll give you the highlights in the form of colorful diagrams, starting with this one.</p>
<figure>
<img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5bB3HmSrIIhQ2AzVpheJWa/8d2b73f3f2f0602217daaf7fea847e11/image6.png" />
</figure><p>This diagram shows our customers' most commonly viewed client metrics on Catchpoint and how they map onto the server-side processing of a request. Notice that some are directly measured, and some are calculated from the direct measurements. Right in the middle is TTFB, which Catchpoint calculates as the sum of the DNS, Connect, TLS, and Wait times. It’s worth noting again that this is not <i>the</i> definition of TTFB; it is just Catchpoint’s definition, and now ours.</p><p>This breakdown of HTTPS phases is not the only one commonly used. Browsers themselves have a standard for measuring the stages of a request. The diagram below shows how most browsers report request metrics. Luckily (and maybe unsurprisingly) these phases match Catchpoint's very closely.</p>
<figure>
<img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ZouyuBQV7XgER2kqhMy8r/04f750eef44ba12bb6915a06eac532ca/image1.png" />
</figure><p>There are some differences beyond the inclusion of things like <a href="https://html.spec.whatwg.org/#applicationcache"><u>AppCache</u></a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Redirections"><u>Redirects</u></a> (which are not directly impacted by Cloudflare's latency): browser timing metrics are based on timestamps instead of durations. The diagram subtly calls this out with gaps between the different phases, indicating that the computer running the browser may spend time on work that is not part of any phase. We can line up these timestamps with Catchpoint's metrics like so:</p>
<figure>
<img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4TvwOuTxWvMBxKGQQTUfZc/a8105d77725a9fa0d3e5bf6a115a13a5/Screenshot_2025-05-15_at_11.31.46.png" />
</figure><p>Now that we, our customers, and our browsers (with data coming from <a href="https://en.wikipedia.org/wiki/Real_user_monitoring"><u>RUM</u></a>) have a common and well-defined language to talk about the phases of a request, we can start to measure, visualize, and compare the components that make up the network latency of a request. </p>
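<p>Since browsers expose those timestamps to RUM scripts directly, here is a minimal sketch of how they can be turned into Catchpoint-style phase durations. It uses the standard Navigation Timing API; how the timestamps are bucketed into DNS, Connect, TLS, and Wait below follows the diagrams above and is an illustration, not Catchpoint's actual implementation.</p>
<pre><code>// Minimal sketch (TypeScript): deriving Catchpoint-style phase durations from
// the browser's Navigation Timing entry, the way a RUM script might.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

if (nav) {
  const dns = nav.domainLookupEnd - nav.domainLookupStart;
  // connectEnd includes the TLS handshake, so split TLS out when present.
  const tlsStart = nav.secureConnectionStart || nav.connectEnd;
  const connect = tlsStart - nav.connectStart;
  const tls = nav.secureConnectionStart ? nav.connectEnd - nav.secureConnectionStart : 0;
  const wait = nav.responseStart - nav.requestStart;

  // Catchpoint-style TTFB: the sum of the component durations.
  const ttfbSum = dns + connect + tls + wait;
  // Browser-style TTFB: a single timestamp measured from the start of navigation.
  const ttfbFromNavStart = nav.responseStart - nav.startTime;

  console.table({ dns, connect, tls, wait, ttfbSum, ttfbFromNavStart });
}
</code></pre>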
<h2>Visual basics</h2>
<p>Now that we have defined our key latency values, we can record numbers, put them in a chart, and watch them roll by ... except not directly. In most cases, the systems we use to record the data actively prevent us from seeing it in its raw form. Tools like <a href="https://prometheus.io/"><u>Prometheus</u></a> are designed to collect pre-aggregated data, not individual samples, and for good reason. Storing every recorded sample (even compacted) would be an enormous amount of data. Even worse, the data quickly loses its value over time, since the most recent data is the most actionable.</p><p>The unavoidable conclusion is that some aggregation has to be done before performance data can be visualized. In most cases, that aggregation means looking at a series of windowed percentiles over time. The most common are the 50th percentile (median), the 75th, the 90th, and, if you're really lucky, the 99th. Here is an example of a latency visualization from one of our own internal dashboards.</p>
<figure>
<img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/lvjAR41mTJf2d5Vdg5SwT/19ff931587790b1fb7fbcc317ab83a5e/image8.png" />
</figure><p>It clearly shows a spike in latency around 14:40 UTC. Was it an incident? The p99 jumped 13× (from 500 ms to 6,500 ms) for multiple minutes, while the p50 jumped more than 135× (from 4.4 ms to 600 ms). It is a clear signal, so something must have happened, but what was it? Let me keep you in suspense for a second while we talk about statistics and probability.</p>
<h2>Uncooked math</h2>
<p>Let me start with a quote from my dear, close, personal friend <a href="https://www.youtube.com/watch?v=xV4rLfpidIk"><u>@ThePrimeagen</u></a>:</p>
<figure>
<img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/I8VbrcSjVSKY1i7fbVEMl/8108e25e78c1ee5356bbd080c467c056/Screenshot_2025-05-15_at_11.33.40.png" />
</figure><p>It's a good reminder that while statistics is a great tool for providing a simplified and generalized representation of a complex system, it can also obscure important subtleties of that system. A good way to think of statistical modeling is as lossy compression. In the latency visualization above (which is a plot of TTFB over time), we are compressing the entire spectrum of latency measurements into four percentile bands, and because we are only considering up to the 99th percentile, there's an entire 1% of samples left over that we are ignoring! </p><p>"What?" I hear you asking. "P99 is already well into perfection territory. We're not trying to be perfectionists. Maybe we should get our p50s down first". Let's put things in perspective. This zone (<a href="http://www.cloudflare.com/"><u>www.cloudflare.com</u></a>) is getting about 30,000 req/s, and the 99th percentile latency is 500 ms. (Here we are defining latency as “Edge TTFB”, a server-side approximation of our now official definition.) So 1% of that traffic, about 300 requests every second, is taking longer than half a second to complete, and that's just the portion of the request that <i>we</i> can see. How much worse than 500 ms are those requests in the top 1%? If we look at the 100th percentile (the max), we get a much different vibe from our Edge TTFB plot.</p>
<figure>
<img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NDvJObDLjy5D8bKIEhsjS/10f1c40940ba41aae308100c7f374836/image12.png" />
</figure><p>Viewed like this, the spike in latency no longer looks so remarkable. Without seeing more of the picture, we could easily believe something was wrong when in reality, even if something is wrong, it is not localized to that moment. In this case, it's like we are using our own statistics to lie to ourselves. </p>
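<p>To make the "lossy compression" framing concrete, here is a small sketch (purely illustrative, not our actual pipeline) of the kind of windowed percentile summary a dashboard series is built from, including the max that the percentile bands quietly drop:</p>
<pre><code>// Illustrative only: summarize one time window of latency samples the way a
// dashboard series does, keeping the max that percentile bands usually hide.
function percentile(sorted: number[], p: number) {
  // nearest-rank percentile on an already-sorted array
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, Math.min(sorted.length - 1, rank - 1))];
}

function summarizeWindow(latenciesMs: number[]) {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  return {
    p50: percentile(sorted, 50),
    p75: percentile(sorted, 75),
    p90: percentile(sorted, 90),
    p99: percentile(sorted, 99),
    max: sorted[sorted.length - 1], // the 100th percentile most charts never show
  };
}

// 1,000 ordinary requests plus one 6,500 ms straggler: the percentile bands
// barely move, and only the max gives the straggler away.
const samples = Array.from({ length: 1000 }, () => 5 + Math.random() * 500);
samples.push(6500);
console.log(summarizeWindow(samples));
</code></pre>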
<h2>The top 1% of requests have 99% of the latency</h2>
<p>Maybe you're still not convinced. It feels more intuitive to focus on the median because the latency experienced by 50 out of 100 people seems more important than that of 1 in 100. I would argue that is a totally true statement, but notice I said "people" and not "requests." A person visiting a website is not likely to be doing it one request at a time.</p><p>Taking <a href="http://www.cloudflare.com/"><u>www.cloudflare.com</u></a> as an example again, when a user opens that page, their browser makes more than <b>70</b> requests. That sounds like a lot, but in the world of user-facing websites, it’s not that bad. In contrast, <a href="http://www.amazon.com/"><u>www.amazon.com</u></a> issues more than <b>400</b> requests! It's worth noting that not all of those requests need to complete before a web page or application becomes usable. That's why more advanced and browser-focused metrics exist, but I will leave a discussion of those for later blog posts. I am more interested in how making that many requests changes the probability calculations for expected latency on a per-user basis. </p><p>Here's a brief primer on combining probabilities that covers everything you need to know to understand this section.</p><ul><li><p>The probability of two independent things both happening is the probability of the first happening multiplied by the probability of the second happening. $$P(X\cap Y )=P(X) \times P(Y)$$</p></li><li><p>The probability that a single sample falls at or below the $X^{th}$ percentile is $X\%$. $$P(pX) = X\%$$</p></li></ul><p>Let's define $P( pX_{N} )$ as the probability that someone on a website with $N$ requests experiences no latency at or above the $X^{th}$ percentile. For example, $P(p50_{2})$ would be the probability of getting no latencies greater than the median on a page with 2 requests. This is equivalent to the probability of one request having a latency less than the $p50$ and the other request also having a latency less than the $p50$, so we can combine the two identities above. </p><p>$$\begin{align}
P( p50_{2}) &= P\left ( p50 \cap p50 \right ) \\ &= P( p50) \times P\left ( p50 \right ) \\ &= (50\%)^{2} \\ &= 25\% \end{align}$$
We can generalize this for any percentile and any number of requests. $$P( pX_{N}) = (X\%)^{N}$$
For www.cloudflare.com and its 70ish requests, the percentage of visitors that won’t experience a latency above the median is
$$\begin{align} P( p50_{70}) &= (50\%)^{70} \\ &\approx 0.000000000000000000085\% \end{align}$$
This vanishingly small number should make you question why we value the $p50$ latency so highly at all when effectively no one experiences it as their worst-case latency.
So now the question is, what request latency percentile should we be looking at? Let’s go back to the statement at the beginning of this section. What does the median person experience on www.cloudflare.com? We can use a little algebra to solve for that.
$$\begin{align} P( pX_{70}) &= 50\% \\ (X\%)^{70} &= 50\% \\ X\% &= e^{ \frac{\ln\left ( 50\% \right )}{70}} \\ X\% &\approx 99\% \end{align}$$
This seems a little too perfect, but I am not making this up. For www.cloudflare.com, if you want to capture a value that’s representative of what the median user can expect, you need to look at $p99$ request latency. Extending this even further, if you want a value that’s representative of what 99% of users will experience, you need to look at the 99.99th percentile!
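If you want to plug in your own page's request count, the same arithmetic fits in a few lines (the function names here are just for illustration):
<pre><code>// The percentile math from this section as runnable code (illustrative names).
// Probability that a visitor making n requests sees no request slower than
// the p-th percentile: (p/100)^n.
function probAllBelow(p: number, n: number) {
  return Math.pow(p / 100, n);
}

// The request percentile whose "no request slower than this" probability
// equals a target share of users (0.5 = the median user).
function percentileForUserShare(share: number, n: number) {
  return 100 * Math.exp(Math.log(share) / n);
}

console.log(probAllBelow(50, 70));             // ~8.5e-22: essentially no one
console.log(percentileForUserShare(0.5, 70));  // ~99.0  -> p99 is the median user's worst request
console.log(percentileForUserShare(0.99, 70)); // ~99.99 -> what 99% of users stay under
</code></pre>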