Core Web Vitals in 2026: Understanding LCP, INP & CLS
Everything That Changed, Updated Thresholds, and How to Measure What Matters

If you’ve been building for the web for any amount of time, you know that performance isn’t a feature you ship once and forget. It’s a relentless moving target, and Google keeps updating how they measure it.
The Core Web Vitals that mattered in 2024 still matter in 2026, but the ecosystem around them has evolved in ways that are worth paying attention to.
I’ve been optimizing sites against these metrics since they were introduced, including this very site and various versions of it. Over that time, I’ve debugged countless performance issues, seen what actually moves the needle in production, and watched where most developers stumble.
In this two-part guide, I’ll walk you through the 2026 state of Core Web Vitals. This first part covers the metrics, what’s changed, and how to measure them. In Part 2, I cover optimization strategies, quick wins, and automated monitoring.
**Tip:** This series is a companion to my existing Core Web Vitals optimization series. That series covers deep implementation techniques. This guide gives you the 2026 overview and gets you up to speed on what’s current.
The Three Core Web Vitals Explained
Let’s start with the metrics themselves. If you’re already familiar, skip ahead. If you’re here because someone told you “your Core Web Vitals are failing” and you’re not sure what that means, this section is for you.
Core Web Vitals are three specific metrics that Google uses to measure real-world user experience:
LCP: Largest Contentful Paint
What it measures: How long it takes for the biggest visible element on the page to finish rendering. Think hero images, above-the-fold headings, or large text blocks.
The thresholds:
- Good: ≤ 2.5 seconds
- Needs improvement: 2.5s – 4.0s
- Poor: > 4.0 seconds
Why it matters: LCP is the metric users feel most directly. When you click a link and stare at a blank or half-loaded page for three seconds, that’s a poor LCP experience. Users associate slow LCP with “this site is broken” and bounce.
*Real footage of your users when the site takes more than a second to load.*
The most common culprits:
- Unoptimized hero images (wrong format, too large, no responsive sizing)
- Render-blocking CSS and JavaScript in the `<head>`
- Slow server response time (high TTFB)
- Web fonts that block text rendering
- Third-party scripts that compete for bandwidth
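To make the first two culprits concrete, here’s a minimal sketch of an optimized hero image setup (file names and dimensions are placeholders): preload the image in the `<head>` so the browser discovers it early, serve responsive sizes, and mark it high priority so it wins the bandwidth fight against third-party scripts.

```html
<head>
  <!-- Let the browser discover the hero image before the parser reaches the <img> -->
  <link rel="preload" as="image" href="hero-800.webp"
        imagesrcset="hero-800.webp 800w, hero-1600.webp 1600w"
        imagesizes="100vw" />
</head>
<body>
  <!-- Responsive sizing plus high fetch priority for the LCP element -->
  <img src="hero-800.webp"
       srcset="hero-800.webp 800w, hero-1600.webp 1600w"
       sizes="100vw"
       fetchpriority="high"
       width="1600" height="900"
       alt="Hero illustration" />
</body>
```

The explicit `width` and `height` attributes do double duty here: they also prevent the layout shift we’ll get to under CLS.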
**Bonus:** I have articles on each of these LCP culprits with specific optimization techniques. Skim this article to find whichever link you need for a deep dive, or visit the series page for the full Core Web Vitals series. I’ll still cover all of them here.
INP: Interaction to Next Paint
What it measures: How quickly your page responds when a user interacts with it. Clicks, taps, key presses — INP tracks them all and reports a value close to the worst interaction during the visit.
The thresholds:
- Good: ≤ 200 milliseconds
- Needs improvement: 200ms – 500ms
- Poor: > 500 milliseconds
Why it matters: This is the metric that replaced First Input Delay (FID) in March 2024, and it’s significantly harder to pass. FID only measured the first interaction. INP measures every interaction and picks near the worst. A page that responds instantly to the first click but freezes on the third click would have passed FID but fails INP.
Two full years into INP being a Core Web Vital, the data is clear: sites that struggle with INP almost always have a JavaScript problem. Heavy frameworks, unoptimized event handlers, and long tasks on the main thread are the usual suspects.
The most common culprits:
- Long JavaScript tasks (> 50ms) blocking the main thread
- Heavy framework hydration (React, Vue, Angular initial render)
- Layout thrashing in event handlers (reading layout properties, then writing, then reading again)
- Synchronous third-party scripts (analytics, ads, chat widgets)
- Unthrottled scroll and resize handlers
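As one illustration of that last culprit, here’s a minimal hand-rolled throttle sketch (the helper is my own; libraries like lodash ship hardened versions with trailing-call support) that caps how often a scroll handler can run:

```javascript
// Simple leading-edge throttle: run at most once per `waitMs`.
// Calls arriving inside the window are dropped; a production
// version would usually also schedule a trailing call.
function throttle(fn, waitMs) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      fn(...args);
    }
  };
}

// Usage sketch (browser): keep scroll work cheap and infrequent.
// window.addEventListener('scroll', throttle(updateStickyHeader, 100));
```

Even better than throttling is moving layout reads out of the handler entirely, which avoids the layout-thrashing culprit at the same time.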
CLS: Cumulative Layout Shift
What it measures: How much the visible content shifts around unexpectedly while the page loads and during interaction. Every time an element moves without user input, that’s a layout shift.
The thresholds:
- Good: ≤ 0.1
- Needs improvement: 0.1 – 0.25
- Poor: > 0.25
Why it matters: There’s nothing more frustrating than reaching for a button and having it jump away because an ad loaded above it. CLS captures that frustration as a number. It’s a dimensionless score based on how far elements move and how much of the viewport is affected.
The most common culprits:
- Images and videos without explicit `width` and `height` attributes
- Ads, embeds, and iframes that load after initial render without reserved space
- Web fonts causing text to reflow (FOIT/FOUT)
- Dynamically injected content above existing content
- Late-loading CSS that changes element dimensions
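Most of these culprits share one fix: reserve the space before the content arrives. A small CSS sketch (class names are placeholders):

```css
/* Reserve the image's box via its intrinsic ratio so the page
   doesn't reflow when the file arrives */
img.card-thumb {
  width: 100%;
  aspect-ratio: 16 / 9;
  height: auto;
}

/* Hold a fixed slot open for a late-loading ad or embed */
.ad-slot {
  min-height: 250px;
}
```

The browser lays out the empty boxes on first paint, so when the image or ad finally loads, nothing below it moves.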
What Changed in 2026
The headline: the threshold numbers themselves are stable, so no emergency rewrites are needed.
But the ecosystem around them? That’s evolved in ways that matter to how you should approach optimization.
INP Maturity: Two Years of Real-World Data
INP is no longer the scary new metric, and the pattern in the field data is ruthlessly clear: sites that handle the happy path perfectly can still blow past 500ms on a user’s third interaction, because INP reports close to the worst interaction of a visit, not just the first. Three lessons stand out:
- The 75th percentile measurement means your worst interactions matter. You can’t just optimize the happy path. That one interaction where the user clicks a filter and the UI freezes for 400ms? That’s your INP.
- Single-page applications (SPAs) consistently struggle more than server-rendered or statically generated sites. Client-side routing, virtual DOM diffing, and heavy state updates all add up.
- Third-party scripts are often the biggest culprits. Analytics, chat widgets, consent banners, and ad scripts frequently cause long tasks that block user interactions.
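To ground what “75th percentile” means in practice, here’s a minimal nearest-rank percentile helper in plain JavaScript (my own sketch, not part of any CWV tooling): CrUX-style aggregation reports the value that 75% of page views were at or below, which is why one bad segment of users can fail an otherwise fast site.

```javascript
// Nearest-rank percentile: the smallest sample value such that
// at least p% of the samples are <= that value.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Ten visits' INP values (ms): most are fine, a few are slow.
const inpSamples = [80, 90, 100, 110, 120, 130, 140, 150, 160, 600];
const p75 = percentile(inpSamples, 75); // → 150, well inside "good"
```

Note how the 600ms outlier doesn’t sink the p75 score here, but if a quarter of your visits hit interactions that slow, it would.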
New Browser APIs That Help
Several browser APIs have matured since 2024 that directly help with Core Web Vitals:
scheduler.yield() — This is probably the single most useful API for INP optimization. It lets you explicitly yield control back to the browser in the middle of a long task, allowing it to process pending user interactions before resuming your work:
```javascript
async function processLargeDataset(items) {
  for (const item of items) {
    processItem(item);
    // Yield to the browser every iteration
    // so user interactions aren't blocked
    await scheduler.yield();
  }
}
```

content-visibility: auto — Tells the browser to skip rendering for off-screen elements until they’re needed. This can dramatically reduce initial render cost for long pages:
```css
.article-section {
  content-visibility: auto;
  contain-intrinsic-size: 0 500px;
}
```

View Transitions API — While not directly a CWV optimization, view transitions can make page navigations feel smoother, reducing perceived loading time and preventing layout shifts during multi-page navigation.
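Since browser support for view transitions is still uneven, a progressive-enhancement wrapper is the usual pattern. A hedged sketch (the wrapper name is mine):

```javascript
// Run a DOM update inside a view transition when the API exists,
// otherwise just apply the update directly.
function withViewTransition(updateDom) {
  if (typeof document !== 'undefined' && document.startViewTransition) {
    document.startViewTransition(updateDom);
  } else {
    updateDom();
  }
}

// Usage sketch (browser):
// withViewTransition(() => renderNewRoute());
```

Unsupported browsers simply skip the animation; the update itself always runs.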
fetchpriority attribute — Now widely supported, this attribute lets you tell the browser which resources to prioritize:
```html
<!-- Prioritize the hero image -->
<img src="hero.webp" fetchpriority="high" alt="..." />

<!-- Deprioritize below-the-fold images -->
<img src="footer-logo.webp" fetchpriority="low" alt="..." />
```

Chrome User Experience Report (CrUX) Evolution
CrUX, which provides the field data Google uses for ranking decisions, now provides more granular data than ever. You can query URL-level CWV data (not just origin-level), and the BigQuery dataset includes breakdowns by connection type and device class. This makes it easier to identify which users are experiencing poor performance.
I personally haven’t gotten the chance to explore this new granularity in depth yet, but it sounds like a game-changer for diagnosing performance issues that only affect certain segments of your audience.
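If you want to pull this field data yourself, the CrUX API takes a POST to its `queryRecord` endpoint with a JSON body naming the URL (or origin) and an optional form factor. A minimal sketch of building that body (the helper is mine; the field names follow the public CrUX API):

```javascript
// Build the JSON body for a CrUX API queryRecord call.
// Endpoint, for reference:
// https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY
function buildCruxRequest(url, formFactor = 'PHONE') {
  return {
    url,          // URL-level query; use an `origin` field instead for site-wide data
    formFactor,   // PHONE, TABLET, or DESKTOP
    metrics: [
      'largest_contentful_paint',
      'interaction_to_next_paint',
      'cumulative_layout_shift',
    ],
  };
}
```

Querying the same URL once per form factor is a quick way to spot the “fine on desktop, failing on phones” split.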
How to Measure Core Web Vitals
There are two fundamentally different ways to measure these metrics, and understanding the difference is critical.
Lab Tools (Synthetic Testing)
Lab tools run in a controlled environment with a simulated device and network:
- Lighthouse (Chrome DevTools, CLI, or CI) — the most common starting point. Run an audit, get scores and specific recommendations.
- PageSpeed Insights — Google’s online tool that combines a Lighthouse audit with CrUX field data for the URL.
- WebPageTest — More detailed than Lighthouse, with filmstrip views and waterfall charts. Great for diagnosing exactly when things happen during a page load.
Lab tools are excellent for debugging. You can reproduce issues, test changes, and compare before/after. But they don’t represent real users.
Field Tools (Real User Data)
Field tools collect data from actual users visiting your site:
- Chrome User Experience Report (CrUX) — Google’s own dataset of real Chrome users. This is what Google uses for ranking decisions. Available through PageSpeed Insights, Search Console, and BigQuery.
- Google Search Console — The Core Web Vitals report shows which URLs pass or fail based on CrUX data, grouped by similar pages.
- `web-vitals` JavaScript library — Google’s official library for measuring CWV in your own analytics. Lightweight and accurate.
```javascript
import { onLCP, onINP, onCLS } from 'web-vitals';

onLCP(metric => sendToAnalytics('LCP', metric));
onINP(metric => sendToAnalytics('INP', metric));
onCLS(metric => sendToAnalytics('CLS', metric));
```

**Important:** If your Lighthouse score is green but Search Console shows red, trust Search Console. Field data from real users on real devices with real network conditions is what Google ranks you on. I covered this lab-vs-field gap in more detail in my article on Advanced Core Web Vitals optimization.
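The `sendToAnalytics` helper in the snippet above is left to you. One common pattern (a hedged sketch; the `/analytics` path is a placeholder) uses `navigator.sendBeacon` so the report survives page unload:

```javascript
// Serialize a web-vitals metric and ship it to a collection endpoint.
function sendToAnalytics(name, metric) {
  const body = JSON.stringify({
    name,
    value: metric.value,
    id: metric.id,         // deduplicates reports for the same page view
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
  });
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon('/analytics', body);
  }
  return body; // returned so the payload is easy to inspect in tests
}
```

`sendBeacon` matters here because CLS and INP often finalize as the user leaves the page, exactly when a normal `fetch` is likely to be cancelled.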
The Lab vs Field Gap
Here’s a frustrating scenario you may have encountered, or will when starting out with web vitals: you run a Lighthouse audit on your slick MacBook Pro with a fast connection, get a green 100, ship with confidence, and then Search Console shows your real users on mid-range Android phones having a terrible experience.
That’s the lab-vs-field gap in action.
This trips up a lot of developers because the gap is counterintuitive. You did everything right in the lab, so what gives?
- Lighthouse simulates a fixed CPU and network throttle, but real devices vary wildly
- Lab tests are single visits to a cold page; real users have different cache states, browser extensions, and network conditions
- Lab tests historically didn’t capture interactions at all, so INP was invisible in Lighthouse; the latest versions now include an experimental INP assessment
Always measure both. Use lab tools for debugging, use field tools for the truth.
That covers the fundamentals: what the three Core Web Vitals measure, their thresholds, what’s evolved in the 2026 ecosystem, how to measure them, and why field data is the source of truth.
In Part 2: Optimization Strategies & Quick Wins, I cover the actionable side — quick wins for passing each metric in 30 minutes, CDN and infrastructure strategies, the real SEO impact of Core Web Vitals, a full case study of optimizing this site, and how to set up automated performance monitoring so regressions never sneak past you.
This article is also part of my Guide to Improving Page Performance series. For deeper optimization techniques, see Part 1: Real-World Optimization Strategies, Part 2: Image Optimization for the Modern Web, and Part 3: Advanced Core Web Vitals.





