Free Core Web Vitals Checker

Enter any URL and measure LCP, INP, CLS, FCP, and TTFB instantly. Real-user field data from the Chrome UX Report alongside lab data from Lighthouse. Mobile and desktop. See your score and get a ranked list of specific fixes.

Check Your Core Web Vitals

Powered by Google PageSpeed Insights. Tests against real-user data and a full Lighthouse run.

Running a live Lighthouse test. This can take 30 to 60 seconds...

Get a free personalized performance evaluation with specific fixes.


I will personally review your site and follow up within 1-2 business days.

Top Opportunities

Why Core Web Vitals Matter
40%
of mobile visitors abandon a page that takes longer than 3 seconds to load. Core Web Vitals are a direct Google ranking signal, and they decay quietly: every new script, tag, and image a site accretes pushes the scores down. Ignore them and you bleed both traffic and conversions.

What Core Web Vitals Actually Measure

Core Web Vitals are Google's attempt to quantify the three things that matter most to a real user: how fast the main content shows up, how responsive the page feels when you interact with it, and how much the layout jumps around during load.

LCP — Largest Contentful Paint

The time it takes for the largest visible element above the fold to render. Usually a hero image, a video thumbnail, or a block of headline text. Target: under 2.5 seconds. This is the metric that most directly correlates with the feeling of a "slow" or "fast" site.

INP — Interaction to Next Paint

How long it takes the page to visually respond after a user clicks, taps, or presses a key. Target: under 200 milliseconds. INP replaced First Input Delay in March 2024 because FID only measured the first interaction. INP measures every interaction across the session and reports the worst.

CLS — Cumulative Layout Shift

A unitless score that captures how much visible content unexpectedly shifts during loading. Target: under 0.1. Every time an image pops in without reserved space, a font swaps and reflows text, or an ad injects and pushes content down, CLS goes up. Users hate this, and so does Google.

Field Data vs Lab Data

The checker shows both. They measure different things and both matter.

Field data (CrUX)

Real users, real devices, real networks. Aggregated over the last 28 days by Chrome and published as the Chrome UX Report. This is what Google actually uses as a ranking signal. Field data includes INP (lab data cannot measure INP because there are no real interactions during a test run). Sites with low traffic may not have enough data for a CrUX record.
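The same 28-day field data is queryable directly through the CrUX API. A minimal sketch of building that request, assuming you have a Google API key (the placeholder below is not a real key, and the helper name is hypothetical):

```typescript
// Build a request for the public CrUX queryRecord endpoint, which
// serves the same 28-day field data this checker reads.
const CRUX_ENDPOINT =
  "https://chromeuxreport.googleapis.com/v1/records:queryRecord";

function cruxRequest(origin: string, formFactor: "PHONE" | "DESKTOP") {
  return {
    url: `${CRUX_ENDPOINT}?key=YOUR_API_KEY`, // replace with a real key
    body: JSON.stringify({ origin, formFactor }),
  };
}

// Usage: fetch(req.url, { method: "POST", body: req.body })
```

Low-traffic origins return a 404 here, which is the API's way of saying there is no CrUX record.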

Lab data (Lighthouse)

One synthetic test run in a controlled environment. Simulated mid-tier mobile hardware on a simulated Slow 4G connection. Every visit is identical, which makes it great for tuning and comparison but unrepresentative of what your actual users experience. Lab data does not include INP — it reports Total Blocking Time instead, which correlates but is not the same metric.

Which one to trust

Trust field data when it is available. Fall back to lab data when field data is not yet populated, or after major changes that have not had 28 days to roll through CrUX. A site can have green lab scores and red field scores; that means synthetic runs are fast but real users on real devices are having a worse experience. Optimize for the field.

Why Core Web Vitals Affect Your Rankings

Google confirmed Page Experience as a ranking signal in 2021 and rolled the Core Web Vitals into it. They are not the biggest ranking factor — relevance and authority still dominate — but they are the tiebreaker between otherwise similar pages.

The way Google frames it: when two pages have comparable content quality, the faster, more stable one wins. When a page is significantly slower than competitors in the same space, it drops. The penalty is gradual, not cliff-edged, which is why some site owners do not notice they have been slipping.

Beyond rankings, there is the bounce rate effect. Sites that fail LCP see dramatically higher exits before the content even loads. A page that does not reach LCP within 4 seconds loses roughly one in three visitors before the main content ever renders. That is traffic you paid to acquire, walking away before they can convert.

And unlike content quality or backlinks, Core Web Vitals are mostly fixable. Most performance issues trace back to a handful of known patterns: unoptimized images, render-blocking CSS, heavy third-party scripts, and poor asset caching. Technical SEO work that hits these four categories usually moves scores into the green within a single release cycle.

How to Read Your Results

The tool returns two different views of your site. Here is what to focus on.

The performance score

A number from 0 to 100 that weights all the individual metrics into a single figure. 90 and up is passing. 50 to 89 means there is meaningful headroom. Below 50 means performance is likely a significant ranking drag. The score is the lab score from Lighthouse — useful as a single headline number, but do not tune purely against it without also checking the field metrics.

The five metric cards

LCP, INP, CLS, FCP, and TTFB. Each shows the current value, whether it came from field data or lab, and the target threshold. Green cards are passing, amber means needs-improvement, red means failing. Focus on red cards first: they are the biggest ranking risk and usually the lowest-hanging fix.
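The color bands follow Google's published thresholds. A minimal sketch of the classification, using the "good" targets quoted above plus the standard "poor" cutoffs (4000 ms LCP, 500 ms INP, 0.25 CLS):

```typescript
type Band = "good" | "needs-improvement" | "poor";

// [good cutoff, poor cutoff] per metric; LCP and INP in milliseconds,
// CLS unitless.
const THRESHOLDS = {
  lcp: [2500, 4000],
  inp: [200, 500],
  cls: [0.1, 0.25],
} as const;

function band(metric: keyof typeof THRESHOLDS, value: number): Band {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}
```

So an LCP of 2100 ms lands in green, an INP of 350 ms in amber, a CLS of 0.3 in red.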

The opportunities list

Ranked by potential time savings. The top item is where the most speed is hiding. Common winners: "Eliminate render-blocking resources," "Properly size images," "Reduce unused JavaScript," "Serve images in next-gen formats." Each includes an explanation and an estimate of seconds saved if fixed.

Mobile vs desktop

Run both. Mobile is what Google uses for ranking in most cases, so the mobile score matters more. But desktop often surfaces different issues: heavier JavaScript bundles, larger images, and extra third-party tags that mobile layouts never load.

Fixing Each Metric

Every metric has its own distinct failure pattern and its own toolkit for fixing it. Here is the short version of what actually works.

Fixing LCP

The usual culprits are slow server response, render-blocking CSS, and unoptimized hero images. The highest-leverage fixes: preload the largest image, serve it in WebP or AVIF with a correctly sized variant for the viewport, move critical CSS inline, and defer anything not needed above the fold. Most sites see LCP drop by one to two full seconds after these changes alone.
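In markup, those fixes look roughly like this. File paths, sizes, and the async-stylesheet pattern here are illustrative, not a prescription:

```html
<!-- Preload the hero image so the browser fetches it early. -->
<link rel="preload" as="image" href="/img/hero-800.avif"
      fetchpriority="high">

<!-- Inline only the CSS needed above the fold; load the rest async. -->
<style>/* critical above-the-fold rules go here */</style>
<link rel="stylesheet" href="/css/main.css" media="print"
      onload="this.media='all'">

<!-- Serve the hero in a modern format, sized to the viewport. -->
<img src="/img/hero-800.avif"
     srcset="/img/hero-800.avif 800w, /img/hero-1600.avif 1600w"
     sizes="100vw" width="1600" height="900" alt="Hero">
```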

Fixing INP

INP fails when the main thread is blocked on a long JavaScript task. Split large bundles, defer non-critical scripts, audit third-party tags (ad libraries, chat widgets, and heatmap trackers are the worst offenders), and move expensive work off the main thread with Web Workers where it makes sense. The goal is sub-50ms event handlers.
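The core pattern for unblocking the main thread is chunking: do a slice of work, yield, repeat. A minimal sketch, with hypothetical names; in newer browsers `scheduler.yield()` can replace the `setTimeout` trick:

```typescript
// Process a large array in small chunks, yielding to the event loop
// between chunks so pending user interactions can run and paint.
async function processInChunks<T>(
  items: T[],
  work: (item: T) => void,
  chunkSize = 100,
): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) work(item);
    // Yield the main thread before starting the next chunk.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```

Each chunk stays a short task, so a tap that arrives mid-processing gets handled within the next yield instead of waiting for the whole loop.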

Fixing CLS

The fixes here are mostly CSS hygiene. Set explicit width and height on every image and iframe. Use aspect-ratio for fluid containers. Use font-display: optional or preload fonts to avoid late swaps. Reserve exact space for ads, embeds, and dynamic modules before they load. CLS fixes rarely take hours — they take minutes once you know where to look.
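A sketch of that hygiene in one place; class names, dimensions, and font paths are hypothetical:

```html
<!-- Explicit dimensions let the browser reserve space before load. -->
<img src="/img/chart.png" width="640" height="360" alt="Chart">

<style>
  /* Fluid containers keep their shape while content loads. */
  .thumb { aspect-ratio: 16 / 9; width: 100%; }

  /* Avoid a late font swap that reflows text. */
  @font-face {
    font-family: "Body";
    src: url("/fonts/body.woff2") format("woff2");
    font-display: optional;
  }

  /* Reserve the exact slot an ad or embed will fill. */
  .ad-slot { min-height: 250px; }
</style>
```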

When to Rescan and What to Expect

Performance work is iterative. Here is how to interpret what happens after you make a change.

Lab data updates immediately

Every scan runs a fresh Lighthouse pass, so lab data reflects the current state of the site the moment you deploy. Use lab scores to confirm a fix actually took effect. If you shipped an image optimization and LCP did not move in lab, the change is not live — check your cache, CDN, or build output before assuming the fix did not work.
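You can trigger the same fresh Lighthouse pass yourself through the public PageSpeed Insights v5 API. A sketch of the request, assuming occasional use without an API key (the helper name is made up):

```typescript
// Build a request URL for the PageSpeed Insights v5 API, which runs
// a fresh Lighthouse pass against the live page.
function psiUrl(target: string, strategy: "mobile" | "desktop"): string {
  const api = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";
  const params = new URLSearchParams({ url: target, strategy });
  return `${api}?${params}`;
}

// Usage: fetch(psiUrl("https://example.com", "mobile"))
//   .then((r) => r.json())
//   .then((d) => d.lighthouseResult.categories.performance.score);
```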

Field data lags 28 days

CrUX is a 28-day rolling window. Even if you fix CLS perfectly today, each passing day only swaps one day of old data for one day of new, so the field score improves gradually rather than jumping. Expect meaningful field movement over roughly two to four weeks, with full stabilization around day 30.

Scores can move without code changes

Traffic composition changes field scores. If you suddenly get more mobile users on slow connections (a viral post, a new region, a newsletter send), your field scores may drop even though the code is identical. Check both your scores and your audience mix when investigating unexpected shifts.

Pair with other tools

Use this checker for fast spot-checks. For deeper analysis, pair it with the SEO Health Checker and the AI Discoverability Checker to get the full technical picture. For site-wide monitoring, Google Search Console's Core Web Vitals report is the official source.

Frequently Asked Questions

What are Core Web Vitals in plain language?

They are three user-experience metrics Google uses as a ranking signal: Largest Contentful Paint measures how fast the main content shows up, Interaction to Next Paint measures how responsive the page feels when you click or tap, and Cumulative Layout Shift measures how much the layout jumps around while loading. All three matter to real humans — Google started using them as a ranking factor because they correlate strongly with whether users stick around or bounce.

What is a good LCP score?

Largest Contentful Paint should complete in under 2.5 seconds on a real user device. Between 2.5 and 4 seconds is considered "needs improvement." Over 4 seconds is poor. These thresholds are measured at the 75th percentile, so the goal is to hit the target for three out of four of your users.
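The 75th-percentile framing can be sketched as a small computation (nearest-rank method; CrUX's exact aggregation may differ, and the sample values are illustrative):

```typescript
// Compute the 75th percentile of a set of LCP samples (ms), the
// statistic used to judge pass/fail.
function p75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// Example: p75([1800, 2100, 2300, 5200]) → 2300, under the 2500 ms
// target: the page passes even though one in four loads was slow.
```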

What is INP and why did it replace FID?

Interaction to Next Paint measures how long it takes the page to visually respond after a user clicks, taps, or presses a key. First Input Delay only measured the first interaction of a session, which missed the majority of responsiveness issues. INP measures every interaction across the session and reports the worst one. Google swapped FID for INP as a Core Web Vital in March 2024. Target INP is under 200 milliseconds.

What actually causes layout shift?

Layout shift is mostly caused by images and iframes without explicit width and height, fonts that swap and reflow text after load, ads or embeds that inject above existing content, and animations that move elements without using transform. Setting explicit dimensions and reserving space for dynamic content is usually a one-afternoon fix that drops CLS into the green zone.

What is the difference between field data and lab data?

Field data comes from real users on real devices over the previous 28 days and is published as the Chrome User Experience Report. It is what Google actually uses for ranking. Lab data comes from a single synthetic test run in a controlled environment by Lighthouse. Field data is authoritative but can lag behind recent code changes; lab data reflects the current state of the site instantly but is not what Google ranks against.

How quickly do fixes show up after deploying?

Lab data updates immediately — every scan runs a fresh Lighthouse pass, so you can confirm a fix took effect within minutes of deploying. Field data is a 28-day rolling window, so expect meaningful movement over two to four weeks with full stabilization around day 30. If lab data does not move after a deploy, the fix did not land — check your build output, CDN cache, or deployment logs.

Stuck Below 90?

Core Web Vitals fixes are fiddly and compounding. If the tool flagged issues you do not want to chase yourself, book a call and I will fix them. Most sites hit passing thresholds within one or two sprints.

Book a Call