Notes from Web Performance Fundamentals v1 - Frontend Masters
9/15/2024
Teacher: Todd Gardner, TrackJS
Course Published: March 23, 2021
Overview
Todd walks us through why Google introduced new metrics for page ranking and how to build websites that nail them.
Why is performance important?
The data:
- every 1s improvement = >2% increase in revenue
- 2.4s load = ~2% greater conversion rate
The cold hard facts:
- Google says so
- Search rankings are based in part on performance
- Frustrated users don’t stick around
What does fast mean? Psychology of waiting
Wait time is subjective, perceived performance varies based on context:
- People want to start immediately
- Bored waits feel slower
- Anxious waits feel slower
- Unexplained waits feel slower
- Uncertain waits feel slower
- People will wait longer for value
Measuring web performance
Old way: page load time. This is antiquated because "load" was gamed - lazy-loading techniques flourished, i.e. paint as little as possible until after the load event fires.
New way: Web Vitals - FCP, LCP, FID, CLS
First Contentful Paint (FCP) - time from empty to the initial rendered content, when user gets an indication that the page is loading
Largest Contentful Paint (LCP) - time until the user sees most of the page is ready
Cumulative Layout Shift (CLS) - distance of moving page elements during the lifetime of the page
First Input Delay (FID) - time between the user’s first interaction with the page and the execution of the event
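A sketch (mine, not from the course) of how these metrics can be read in the browser with PerformanceObserver - the entry type names are part of the web platform, the logging is just illustrative:

```js
// Browser-only: run in a page, not in Node
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log("LCP candidate:", entry.startTime, entry.element);
  }
}).observe({ type: "largest-contentful-paint", buffered: true });

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) console.log("layout shift:", entry.value);
  }
}).observe({ type: "layout-shift", buffered: true });

new PerformanceObserver((list) => {
  const [entry] = list.getEntries();
  console.log("FID:", entry.processingStart - entry.startTime);
}).observe({ type: "first-input", buffered: true });
```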
Lighthouse - Chrome Dev Tools performance report generator
Lighthouse runs against your local machine, so it isn't conclusive about what a random user would see - it shows what a user with a setup like yours would see while interacting with the page.
√ clear storage
√ wipe cookies
√ simulated throttling (emulates slower hardware)
√ Device - Desktop
√ Keep in the foreground while running report
Where should we measure performance?
Lab data - how site performs in best possible circumstance, like Lighthouse connecting to a development server, low signal, low noise
Synthetic data - tests run from your own servers in a simulated environment; New Relic, Pingdom, and other tools help set up these tests; more signal, more noise
RUM (field data) - real user monitoring tool, lets developers see what actual users see, high signal, high noise, requestmetrics.com
- https://crux-compare.netlify.app/ - Chrome has been gathering performance data on sites for years (the CrUX dataset); this app exposes that Chrome user data as Web Vitals aggregated to a single point per month
- field averages can be misleading, so percentiles are a better metric (p75 = 75% of users have this experience or better)
- p95 - what the worst users experience
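The percentile idea in code - a quick sketch of the nearest-rank method (helper name and sample numbers are mine, not from the course):

```javascript
// Percentiles beat averages for field data: one outlier can drag a mean,
// but p75 answers "75% of users saw this value or better".
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  // nearest-rank: the smallest value such that p% of samples are <= it
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

const lcpSamplesMs = [900, 1200, 1300, 1500, 2100, 2400, 3000, 8000];
console.log(percentile(lcpSamplesMs, 75)); // 2400
console.log(percentile(lcpSamplesMs, 95)); // 8000 - the outlier dominates p95
```

Note how the mean of these samples (2550 ms) sits above p75 because of the single 8000 ms outlier.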
Improving web metrics
Web business objectives
- awareness
- retention
- conversion
- competition
A site must be roughly 20% faster than the competition for users to notice the difference
Some of the most important metrics for determining performance are actually business metrics like bounce rate or session time
- need to gather real data to test hypothesis
window.performance
- performance.getEntries() - returns a list of timing entries that closely resemble the Network tab of Chrome DevTools
Capture perf data
It’s worth noting here that Todd wrote some wonderfully helpful spreadsheets that are only available through Frontend Masters that allow you to capture performance data via console.log() and copy and paste that data for easy visualization.
- compare performance analytics against business data to find real-world issues
- CLS is often inversely correlated to session time
- the greater the layout shift, the faster the user leaves the page
- LCP is often inversely correlated to bounce rate
- “if it takes a long time for your page to load, then users are just going to leave”
These examples are not universal and it’s important to correlate your personal performance data to your business metrics.
How to improve perf?
Do fewer things
Improving First Contentful Paint (FCP)
Make your server quick. Size it correctly. Minimal processing. Proper bandwidth.
- smaller content size (HTML <80k, JPG/SVG ~1mb)
- compression
- shorter transmission distance (US east to west coast is 200-300ms)
- CDN - content delivery network, a network of machines that serves content from the location closest to the user
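The compression point above can look like this in an nginx config (a sketch with illustrative values, not a recommended production setup):

```nginx
# Compress text responses before sending them over the wire
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;  # don't bother compressing tiny responses
```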
Improving Largest Contentful Paint (LCP)
- Defer resources until later
- Optimize images
- Reduce request overhead
- Defer resources that aren’t immediately needed
- async - script downloads in parallel but executes as soon as it arrives; even scripts that finish fast still block other load events
- defer - script downloads in parallel but doesn't execute until the document has been parsed
- <script src="/assets/js/banner.js" defer></script>
- order of loading: put deferred scripts at the end of <body>
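The loading options side by side (file paths are made up):

```html
<script src="/assets/js/analytics.js" async></script> <!-- download in parallel, run as soon as it arrives -->
<script src="/assets/js/banner.js" defer></script>    <!-- download in parallel, run after the document is parsed -->
<script src="/assets/js/critical.js"></script>        <!-- blocks parsing: download, run, then continue -->
```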
- Defer off-screen image loading using loading="lazy" on <img> tags
- doesn't work in Safari (at the time of the course)
- use Todd's data-src replacer: hold the real URL in data-src and promote it to src only for images within the viewport
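A sketch of that data-src pattern (my reconstruction, not Todd's actual code): the real URL hides in data-src so the browser won't fetch it, and a script promotes it to src once the image scrolls near the viewport.

```html
<img data-src="/images/hero.jpg" alt="Hero">
<script>
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      entry.target.src = entry.target.dataset.src; // trigger the real load
      observer.unobserve(entry.target);            // only needs to fire once
    }
  });
  document.querySelectorAll("img[data-src]").forEach((img) => observer.observe(img));
</script>
```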
- Optimize images using next-gen formats/compression
- use srcset to load a correctly sized image
- compress images using imagemin
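What srcset looks like in practice (file names and breakpoints are made up):

```html
<!-- The browser picks the smallest image that fits the rendered size -->
<img src="/images/photo-800.jpg"
     srcset="/images/photo-400.jpg 400w,
             /images/photo-800.jpg 800w,
             /images/photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 400px, 800px"
     alt="Photo">
```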
- reduce overhead with HTTP/2
- reuse connections so only one DNS/TCP/SSL per request
- available in Apache and nginx
- SSL required
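Enabling HTTP/2 in nginx is a one-word change once SSL is in place (certificate paths are placeholders):

```nginx
server {
    listen 443 ssl http2;   # http2 rides on the TLS listener
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;
}
```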
- caching headers
- cache-control - how long to hold onto this file, in seconds
- expires - do not keep the file after this datetime
- etag - cookie-like string used to serve a specific version of a file
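The three headers together on a response (values are illustrative):

```http
HTTP/1.1 200 OK
Cache-Control: max-age=31536000
Expires: Wed, 15 Sep 2025 00:00:00 GMT
ETag: "33a64df551425fcc"
```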
- preloading assets we know will be needed immediately
<link rel="preconnect" href="https://fonts.gstatic.com">
<link rel="preload" href="/icons.css" as="style">
Improving CLS
- if CLS happens before LCP it’s okay
- if CLS happens after LCP it’s frustrating
- layout hints
- find rendered size via devtools and give static assets height/width where appropriate
- add height: 100% to <img> to prevent stretching
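The layout-hint idea in markup (dimensions are made up): explicit width/height attributes let the browser reserve the box before the file arrives, so nothing shifts.

```html
<img src="/images/chart.png" width="800" height="400" alt="Chart"
     style="max-width: 100%;">
```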
- Fallback fonts to reduce shift once font loads
- https://screenspan.net/fallback
- https://meowni.ca/font-style-matcher/
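A rough sketch of the fallback-font tuning those tools help with (values are illustrative; the tools above compute real ones for your fonts):

```css
@font-face {
  font-family: "Heading";
  src: url("/fonts/heading.woff2") format("woff2");
  font-display: swap; /* show fallback text immediately, swap when loaded */
}
h1 {
  font-family: "Heading", Arial, sans-serif;
  /* nudge the fallback's metrics toward the web font to shrink the shift */
  letter-spacing: -0.25px;
}
```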
Improving FID
How valuable is the wait? How anxious are they? Do they understand why they’re waiting?
Performance improvements & business projections
- Raise session time and lower bounce rate