Adding Complementary Performance Data to Your Site

by Ethan Gardner

Getting performance data from real users can transform assumptions about how a user experiences a site into objective, actionable information.

In the last two years, there has been increased awareness of web performance thanks to Google's Core Web Vitals (CWV) metrics. These metrics are great and have nearly universal application, but it can also be helpful to add information specific to your site. Fortunately, it's easier than ever to collect this data thanks to features that have made their way into browsers in recent years.

Critical Landmarks

Using the PerformanceObserver API, you can get performance information about page navigation and about in-page landmarks, such as how long it takes to parse the HTML or how long it takes to download and render specific resources. It's even possible to measure server performance with the Server-Timing header.

It's possible you have a dedicated Real User Monitoring (RUM) vendor that takes care of logging this information for you, but if not, most site analytics packages allow you to define and log custom data.

There has been much written about CWV, but for the purposes of this article, I'll talk about data that complements CWV and make comparisons where necessary. If you want to track CWV, the web-vitals package and its accompanying article cover it in detail.

Note: In some cases below, I'm using a placeholder function called sendToAnalytics to indicate where you might want to send the data to your site analytics to get the measurements from real users. You'll need to check the docs of your specific analytics software to see how to send data for your use case.

Implementation

Using PerformanceObserver, it's possible to get precise information about events that occur during the page lifecycle.

Browsers automatically log activities, known as entries, for certain things like the resource timing entry that's recorded when files such as images are loaded, or the navigation timing entry that's recorded when a navigation occurs. Every browser implements a different set of entries. You can run PerformanceObserver.supportedEntryTypes in the browser's console to see what is supported in your browser.
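
For example, pasting this into the DevTools console lists what the current browser can observe; the exact list varies by browser and version:

// Lists the entry types this browser supports,
// e.g. "element", "mark", "measure", "navigation", "resource".
console.log(PerformanceObserver.supportedEntryTypes);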

It's also possible to add your own entries, which you'll see below. The ones that we'll deal with are:

  • performance.mark, which records a single, named timestamp
  • performance.measure, which calculates the difference between two performance marks, i.e. a start and an end mark
  • PerformanceElementTiming, which records timing information about a specific element on the page, such as an image or a text element like <h1>, via an elementtiming attribute on the element
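
Before putting them to work, here's a minimal sketch of how a mark and a measure relate; the example:sort names and the throwaway work being timed are made up for illustration:

performance.mark("example:sort:start");

// stand-in for some real work you want to time
[...Array(100000).keys()].sort((a, b) => b - a);

performance.mark("example:sort:end");

// the measure's duration is the elapsed time between the two marks
performance.measure("example:sort", "example:sort:start", "example:sort:end");
console.log(performance.getEntriesByName("example:sort", "measure")[0].duration);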

Server Timing

The aforementioned web-vitals package has a method to track Time to First Byte (TTFB), which measures the time between the start of navigation and the arrival of the first byte of the response from the server. This is considered a backend metric because it's impacted by network latency, the database queries that run as part of the page's construction, and server resources like memory.

However, TTFB is an aggregate metric. If you want more granularity to break down where time is being spent on the backend, you can use the Server-Timing HTTP header.
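
The header itself is set on the server. Here's a minimal sketch, assuming a Node.js/Express app; the db and render metric names and the stand-in work are placeholders for your own backend phases:

import express from "express";

const app = express();

app.get("/", async (req, res) => {
  // hypothetical data-access step; stands in for real queries
  const dbStart = performance.now();
  const rows = await Promise.resolve([{ title: "Example" }]);
  const db = performance.now() - dbStart;

  // hypothetical template-rendering step
  const renderStart = performance.now();
  const html = `<h1>${rows[0].title}</h1>`;
  const render = performance.now() - renderStart;

  // each metric is `name;dur=<milliseconds>`, comma-separated
  res.set("Server-Timing", `db;dur=${db.toFixed(1)}, render;dur=${render.toFixed(1)}`);
  res.send(html);
});

app.listen(3000);

On the client, each navigation entry exposes these metrics through its serverTiming property, which the function below reads: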

import { sendToAnalytics } from "./sendToAnalytics";

export const trackServerTiming = (threshold = 200) => {
  try {
    for (const { name: url, serverTiming } of performance.getEntriesByType(
      "navigation"
    )) {
      for (const { name, duration } of serverTiming) {
        // we only care about entries that exceed the threshold
        if (duration > threshold) {
          sendToAnalytics(url, name, duration);
        }
      }
    }
  } catch (e) {
    console.error(e);
  }
};
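
Calling it is a one-liner; the import path and the 100ms threshold below are just examples, tune the threshold so you only log the phases that are actually slow:

import { trackServerTiming } from "./trackServerTiming";

// report any server-side phase that took longer than 100ms
trackServerTiming(100);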

Head Blocking Time

As it relates to performance, the <head> of the document is especially sensitive and directly affects rendering. For example, CSS is a render-blocking resource when its media attribute matches the current environment, and so is JavaScript when the <script> tag lacks async, defer, or type="module".

There's even more of a performance hit when the assets are loaded from a third-party domain like Google Fonts or any of the popular JS CDNs. A page cannot render until the blocking assets in the head are downloaded, parsed, and in the case of blocking JavaScript, executed as well.

How much blocking code there is and where it's served from play a part in the initial render performance, but even the order of the tags matters. Harry Roberts has talked extensively about the ideal order for tags in the <head> and even has a diagnostic stylesheet to pinpoint issues. These resources will help remove the bottlenecks, and the PerformanceObserver will help track the before-and-after metrics.

This is done by setting performance marks at the start and end of the <head> and then using performance.measure to calculate the difference between the start and end marks.

<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <title>Example page</title>
  <meta name="description" content="A great example of performance marks">
  <script>
    if ('performance' in window) {
      performance.mark('htmhell:documenthead:start');
    }
  </script>
  <!-- ...the rest of the tags for the head, e.g. style, link, script, and meta tags.
       Probably many lines...
  -->
  <script>
    if ('performance' in window) {
      performance.mark('htmhell:documenthead:end');
      performance.measure('htmhell:documenthead', 'htmhell:documenthead:start', 'htmhell:documenthead:end');
    }
  </script>
</head>

Sometimes third parties add their own performance marks, so prefixing your marks as a namespace of sorts helps keep the chatter down in your analytics data. I like to follow the convention of {site}:{component}:{description}. That way, you can work with the entries programmatically by filtering on the naming convention, as shown below.
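
For instance, a quick check in the console can pull out just your own measures; the htmhell prefix here matches the marks set above:

// grab only the measures that follow our naming convention
const ownMeasures = performance
  .getEntriesByType("measure")
  .filter((entry) => entry.name.startsWith("htmhell:"));

console.log(ownMeasures.map(({ name, duration }) => ({ name, duration })));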

Important Elements

On most pages, there's a piece of content that is considered the most important, and which element that is depends on the type of site and the template. Here are some examples of what the most important element is likely to be:

  • Media: article headline or main image.
  • E-commerce: product image on the product details page.
  • Search pages: the link for the first result.
  • Home page of a marketing site: hero image.
  • Video site: the video poster image.

CWV has a metric called Largest Contentful Paint (LCP), but that tracks the largest visual element, which is not necessarily the most important one. The LCP element can also change based on a number of factors such as viewport size, or be stolen away by prominent third-party content like ads or privacy consent modals.

Element Timing is currently only implemented in Chromium-based browsers, but given the significant market share these browsers have, it's worth tracking, and more browsers may support it in the future. It's as simple as adding an elementtiming attribute to the element you want to track.
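
A minimal sketch; the image path, alt text, and attribute values are placeholders that follow the naming convention from earlier:

<!-- an "element" entry is recorded once the image is painted -->
<img src="/images/product.jpg" alt="Product photo" elementtiming="htmhell:pdp:product-image">

<!-- text works too; the entry is recorded when the text renders -->
<h1 elementtiming="htmhell:article:headline">Example headline</h1>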

According to the spec, the relevant timing value depends on whether the element with the elementtiming attribute is an image or a text node.

For an image, it's the loadTime property that is important, while for a text node, it's the renderTime property that gives the relevant information.

The code below logs both the head blocking time measurement we set earlier and the element timing entries.

If you import the function and call trackPerfMarks("htmhell"), it will report all elementtiming entries, but will limit mark and measure entries to those prefixed with htmhell.

import { sendToAnalytics } from "./sendToAnalytics";

export const trackPerfMarks = (prefix) => {
  const types = ["measure", "mark", "element"];

  types.forEach((type) => {
    try {
      new PerformanceObserver((list) => {
        list.getEntries().forEach((entry) => {
          if (type === "element") {
            sendToAnalytics(
              window.location.href,
              entry.identifier,
              entry.loadTime || entry.renderTime
            );
          } else if (entry.name.indexOf(prefix) !== -1) {
            // marks report a point in time; measures report a duration
            const value = type === "measure" ? entry.duration : entry.startTime;
            sendToAnalytics(window.location.href, entry.name, value);
          }
        });
      }).observe({
        type,
        /* `buffered: true` catches entries that happened
         * before the observer was registered, as well as future events
         */
        buffered: true
      });
    } catch (e) {
      console.error(e);
    }
  });
};

Final Considerations

You can see how all of this is put together in this CodeSandbox demo. If you open your console, you will see the events getting recorded by our mock function. Unfortunately, CodeSandbox doesn't include the Server-Timing header, but if it did, the code would pick up the data.

Application-specific data like this helps form a complete picture of how users are experiencing a site. You can use it to inform your decisions on where to focus performance efforts or to build a case for performance work.

It's worth noting that I register the PerformanceObserver and send the data to analytics from a deferred script, i.e. one with the defer attribute or type="module". I might miss some data this way from visitors who didn't stick around long enough for the script to run, but I'd rather do that than load the script in a non-deferred way and have it block rendering.
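
As a sketch, the wiring could look like the following; the main.js file name and import paths are assumptions based on the examples above:

// main.js, loaded via <script type="module" src="main.js">
// so it doesn't block rendering
import { trackPerfMarks } from "./trackPerfMarks";
import { trackServerTiming } from "./trackServerTiming";

// report marks and measures prefixed with "htmhell", plus all element entries
trackPerfMarks("htmhell");

// report server-side phases slower than the default 200ms threshold
trackServerTiming();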

This type of data is a permanent fixture in my analytics reporting because it helps to track performance over time.

About Ethan Gardner

Ethan Gardner is a full stack developer with nearly 20 years of experience who has worked in the media, advertising, and software industries as an individual contributor and team lead. Most recently, he has been developing microservices on the edge and focusing on web performance optimization.

Website/Blog: ethangardner.com
Twitter: @ethangardner